Corporate America had gotten such a bad reputation (WW1 war profiteering, the crash of '29, the Depression, and support for Nazi Germany) that it decided to launch a major propaganda campaign to remake its image:
In December 1940, as America was emerging from the Great Depression,
more than 5,000 industrialists from across the nation made their
yearly pilgrimage to the Waldorf-Astoria Hotel in New York City,
convening for the annual meeting of the National Association of
Manufacturers.

... snip ...

Note that earlier the same year, in June 1940, Germany had held a victory
celebration at the Waldorf-Astoria with major corporations. Many were
there to hear how to do business with the Nazis. Intrepid, loc1901-4:
One prominent figure at the German victory celebration was Torkild
Rieber, of Texaco, whose tankers eluded the British blockade. The
company had already been warned, at Roosevelt's instigation, about
violations of the Neutrality Law. But Rieber had set up an elaborate
scheme for shipping oil and petroleum products through neutral ports
in South America. With the Germans now preparing to turn the English
Channel into what Churchill thought would become "river of blood,"
other industrialists were eager to learn from Texaco how to do more
business with Hitler.

... snip ...

John Foster Dulles played a major role in rebuilding Germany's economy
and military during the '20s and '30s. The Brothers: John Foster Dulles,
Allen Dulles, and Their Secret World War,

loc865-68:
In mid-1931 a consortium of American banks, eager to safeguard their
investments in Germany, persuaded the German government to accept a
loan of nearly $500 million to prevent default. Foster was their
agent. His ties to the German government tightened after Hitler took
power at the beginning of 1933 and appointed Foster's old friend
Hjalmar Schacht as minister of economics.

loc873-79:
Sullivan & Cromwell floated the first American bonds issued by the
giant German steelmaker and arms manufacturer Krupp A.G., extended
I.G. Farben's global reach, and fought successfully to block Canada's
effort to restrict the export of steel to German arms makers.

loc905-7:
Foster was stunned by his brother's suggestion that Sullivan &
Cromwell quit Germany. Many of his clients with interests there,
including not just banks but corporations like Standard Oil and
General Electric, wished Sullivan & Cromwell to remain active
regardless of political conditions.

loc938-40:
At least one other senior partner at Sullivan & Cromwell, Eustace
Seligman, was equally disturbed. In October 1939, six weeks after the
Nazi invasion of Poland, he took the extraordinary step of sending
Foster a formal memorandum disavowing what his old friend was saying
about Nazism

... snip ...

From the law of unintended consequences ... when the 1943 US Strategic
Bombing program needed the locations of military and industrial targets
in Germany, it got them from Wall Street.

More disclaimer: I've been somewhat in battle with EMV (the chip showing
up on payment cards) since EMV was originally defined. About the same
time EMV was originally done in the mid-90s, I did a chip design that
had *NONE* of the vulnerabilities and exploits of EMV (it was
significantly more secure and cost significantly less). There was a
large EMV pilot done in the US at the turn of the century that was
later described as having spent billions of dollars to prove chips are
less secure than magstripe. I tried to warn about the shortcomings
... but they went ahead anyway ... afterwards all evidence appeared to
disappear without a trace ... and there was speculation that it would be
a long time before it was tried in the US again (it is now 15 years
later) ... letting all the bugs be worked out in other countries.
https://en.wikipedia.org/wiki/EMV

vbcoen@GMAIL.COM (Vince Coen) writes:
I think the stats on migration failures show that many fail regardless
of the target migration mainly is that they over estimate project
time, and quality of the target systems being used in place of m/f.

Taking a straight view the mainframe is slow compared to running on
servers on a instruction throughput basis.

What they miss however is the data through put specs compared to
mainframes where the m/f still wins hands down.

I have tried (just for my self) to build a 8 core PC with separate
Sata controllers for each 15000 rpm drive to match up with m/f
performance but apart from the high costs of each controller there is
still the speed or lack of it of going from the controllers to the
application because of bottle necks in the data bus.

I have not seen any PC/server design mobo that gets around this
problem and until they do - the mainframe is still "the man" for data
processing in bulk.

A simple scenario: the financial industry spent billions of dollars
in the '90s to move from "aging" (mainframe) overnight batch
settlement to straight-through processing using large numbers of
parallel "killer micros". A major source of failure was the widespread
use of industry parallelization libraries that had 100 times the
overhead of COBOL batch. I pointed it out at the time but was
completely ignored ... the toy demos looked so neat. It wasn't until
they tried to deploy that they ran into the scaleup problems (the
100-times parallelization overhead totally swamped the anticipated
throughput increases from using large numbers of "killer micros" for
straight-through processing).
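The scaleup arithmetic can be sketched with a back-of-the-envelope calculation (the numbers here are purely illustrative, not figures from the original pilots):

```python
# Back-of-the-envelope: when per-transaction overhead is 100x the work
# itself, adding parallel processors barely helps overall throughput.

def throughput(n_procs, work_units_per_sec, overhead_factor):
    """Transactions/sec for n_procs processors, each transaction costing
    1 unit of real work plus overhead_factor units of library overhead."""
    cost_per_txn = 1 + overhead_factor
    return n_procs * work_units_per_sec / cost_per_txn

batch = throughput(1, 1000, 0)      # batch: one processor, no added overhead
micros = throughput(50, 1000, 100)  # 50 "killer micros", 100x overhead each

print(batch)   # 1000.0 txns/sec
print(micros)  # ~495 txns/sec: 50 processors still slower than one
```

With a 100x per-transaction overhead, even 50 parallel processors deliver less aggregate throughput than the single no-overhead batch stream, which is the scaleup wall described above.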

In the meantime there has been an enormous amount of work by the
industry (including IBM) on RDBMS parallelization efficiencies. An
RDBMS-based straight-through processing implementation done more
recently easily demonstrated all of the original objectives from the
'90s ... but the financial industry claimed that it would be at least
another decade before they were ready to try again (lots of executives
still bore the scars from the '90s failures and had become risk averse).

In 2009, non-mainframe IBM was touting some of these RDBMS
parallelization scaleup efficiencies. I somewhat ridiculed them
... "From The Annals of Release No Software Before Its Time" ... since
I had been working on it 20 years earlier (and got shut down, being
told I was not allowed to work on anything with more than four
processors).
http://www.garlic.com/~lynn/2009p.html#43 From The Annals of Release No Software Before Its Time
http://www.garlic.com/~lynn/2009p.html#46 From The Annals of Release No Software Before Its Time

Also, in 1980 I got sucked into doing channel extender for STL, which
was moving 300 people from the IMS group to an off-site building. The
channel extender work did lots of optimization to eliminate the
enormous channel protocol chatter latency over the extended link
... resulting in no apparent difference between local and remote
operation. The vendor then tried to get IBM approval for release of my
support ... but there was a group in POK working on some serial stuff
(who were afraid that if it was in the market, it would make releasing
their stuff more difficult) and they managed to get approval blocked.
Their stuff was finally released a decade later, when it was already
obsolete (as ESCON with ES/9000). Some past posts:
http://www.garlic.com/~lynn/submisc.html#channel.extender

In 1988, I was asked to help LLNL standardize some serial stuff they
had, which quickly morphed into the fibre-channel standard (including
lots of stuff that I had done from 1980). Later some of the POK
engineers defined a heavyweight protocol for fibre-channel that
drastically reduced the native throughput; it was eventually released
as FICON. Some past posts:
http://www.garlic.com/~lynn/submisc.html#ficon

The latest published numbers I have from IBM are a peak I/O benchmark
for the z196 that used 104 FICON channels (running over 104
fibre-channels) to get 2M IOPS. At the same time there was a
fibre-channel announced for an e5-2600 blade that claimed over a
million IOPS (two such fibre-channels have greater native throughput
than 104 FICON running over 104 fibre-channels).

In addition, no real CKD disks have been manufactured for decades;
CKD is simulated on industry-standard fixed-block disks. It is
possible to have high-performance server blades running native
fibre-channel with native fixed-block disks, eliminating the
enormous FICON and CKD simulation inefficiencies.

A related z196 I/O throughput number: all 14 SAPs (System Assist
Processors) running at 100% busy peak at 2.2M SSCH/sec ... however, the
recommendation is to limit SAPs to 75%, or 1.5M SSCH/sec.

I have yet to see equivalent numbers published for the EC12 or z13.
EC12 press has been that going from the z196 at 50 BIPS to the EC12 at
75 BIPS (50% more processing) claims only 30% more I/O throughput. The
z13 quote has been 30% more processing than the EC12 (with 40% more
processors than the EC12).

Note that fibre-channel wasn't originally designed for mainframes but
for non-mainframe server configurations (that tend to run a few
thousand dollars), while SATA's design point is the $500-$800 PC. There
are a lot of throughput differences between consumer $500-$800 PCs and
the non-mainframe server blades that are built for heavier-duty
processing.

Crypto trivia: I later became involved in a link encryptor board that
would do multiple megabytes/sec and was built for less than $100. At
first the corporate crypto product people said that it significantly
weakened DES. It took me three months to figure out how to explain to
them that rather than weakening DES, it significantly increased DES
strength. It turned out to be a hollow victory ... I was told I could
make as many as I wanted, but there was only one possible user ... they
would all have to be sent to an address in Maryland. That was when I
first realized that there are three kinds of crypto: 1) the kind they
don't care about, 2) the kind you can't do, and 3) the kind you can
only do for them.

Part of HSDT was 4.5m satellite dishes at Los Gatos, Yorktown and
Austin ... which then had tail-circuits over the Collins Digital Radio
microwave that ran between Bldg. 12 and Bldg. 29 in Los Gatos. Old
email to me (copying several other people):
19 June 1985, 14:55:51 PDT
To: wheeler

Re: 1.544 mbps transmission from SJ b/40 to b/29 via microwave

We have facilities in place to carry 1.544 all of the way from b/40 to
b/29. However, format is DS-1, not some kind of clock and data
interface such as V.35 or 449.

One conversion possibility would be to use the little Coastcomm
multiplexes (<$12 K/pr) with a single v.35 or 449 (either one) at
1.536 mbps. The remaining 8 kbps is standard 193rd bit D3 framing
which in this case would allow encryption of the link. If you'd like,
I'll send you a copy of the data sheet on the interface card.

JimP <solosam90@gmail.com> writes:
Republicans have been cutting school budgets for years. Illiteracy is
on the rise.

All sorts of budgets have been cut and/or diverted for decades.

The 1990 census reported that half of 18-year-olds were functionally
illiterate (and things have continued to decline since then).

The budget director in the '80s reported that they increased SS
payments so they could use them for military spending without having to
say they had increased income taxes (however, at some point in the
future, income taxes will have to be increased to replenish the SS
trust fund, unless they come up with a gimmick to cut SS payments).

Volcker, talking to a civil engineering professor: money has been
diverted from infrastructure spending for so long that there aren't
jobs; with the lack of jobs, students stop taking classes; without
students, universities start shutting down programs and dropping
professors. Confidence Men: Wall Street, Washington, and the Education
of a President, pg 290:
http://www.amazon.com/Confidence-Men-Washington-Education-ebook/dp/B0089LOKKS
Well, I said, 'The trouble with the United States recently is we spent
several decades not producing many civil engineers and producing a huge
number of financial engineers. And the result is s**tty bridges and a
s**tty financial system!'

There were stories about stimulus shovel-ready projects having to hire
Chinese civil engineering companies (there weren't a lot of projects,
since many states were also siphoning off the funds for other purposes
... and state employee pension funds have also been looted in several
ways). Estimates are that there is a couple-trillion-dollar deficit in
diverted infrastructure maintenance spending over the last few decades
that needs to be made up (the total state employee pension fund
shortfall is approaching a similar amount).

A trivial case: part of the charges the PUC allowed a major California
utility company was for brush clearing ... keeping brush away from
power lines, preventing fires. A brush fire that burned some property
started an investigation that found the CEO was diverting the
infrastructure funds to his "bonus". The California energy market was
also heavily involved in the ENRON fraud. Posts mentioning ENRON:
http://www.garlic.com/~lynn/submisc.html#enron

Another way of looting government spending is outsourcing to for-profit
operations .... one of the presidential candidates' platform items is
eliminating for-profit prisons, which have gotten especially
egregious. A couple of recent posts:
http://www.garlic.com/~lynn/2015e.html#85 prices, was Western Union envisioned internet functionality
http://www.garlic.com/~lynn/2015g.html#27 OT: efforts to repeal strict public safety laws

The biggest TBTF fines (tens of billions) from the economic mess were
for fraudulent foreclosures ... the money was to be turned over to
entities that would use it to aid the foreclosure victims ...
for-profit companies were set up by former bank regulators for the
purpose ... who seem to have siphoned off quite a bit of the funds.

Decimal point character and billions

"Sam Thatch" <st342@gmail.com> writes:
Only if you use a completely silly definition of functionally illiterate.
I just don't believe that half of 18yr olds in 1990 could not read a
street sign which says that a particular street is one way and had
to ask someone else to read that for them, or to tell them what
the label on a DVD was saying the DVD movie title was etc and
had to ask someone in the video store to read it for them.

"Functionally illiterate" was about dealing with modern life that was
getting increasingly complex: tax forms, contracts, financial
agreements, mortgages, etc.

There were articles at the time about various institutions trying to
rewrite language to a 3rd/4th-grade level for adults/workers; foreign
automakers (putting plants in the US) were requiring a junior college
degree in order to get workers with a high school education.

1954 RAMAC Prototype

In the late '80s, a senior disk engineer got a talk scheduled at the
annual, worldwide, internal communication group conference, supposedly
on 3174 performance ... but he opened the talk with the statement that
the communication group was going to be responsible for the demise of
the disk division. The issue was that the communication group had
corporate strategic ownership of everything that crossed the datacenter
walls and was fiercely fighting off distributed computing and
client/server, trying to preserve its dumb (emulated) terminal
paradigm and install base. The disk division was seeing the results:
data fleeing the datacenter to more distributed computing platforms,
with a drop in disk sales. The disk division had come up with a number
of solutions, but they were constantly being vetoed by the
communication group. Note that a few short years later the company went
into the red and was being reorganized into the 13 "baby blues" in
preparation for breaking up the company ... and the disk division is no
more.
http://www.garlic.com/~lynn/subnetwork.html#terminal

The Bldg. 14 & 15 machine rooms were running round-the-clock,
pre-scheduled, stand-alone mainframe development device testing. At one
point they had tried to use MVS for testing multiple concurrent devices
... but MVS had a 15-minute MTBF in that environment. I offered to
rewrite the operating system input/output supervisor to make it
absolutely bulletproof and never fail ... so they could do any amount
of concurrent, on-demand testing ... greatly improving productivity (I
was officially research in Bldg. 28 ... but they let me wander around
Silicon Valley).

Then, because they were all running under my software, I would
periodically be called in to diagnose any problem (most of which
turned out not to be mine). They then started insisting that I sit in
on conference calls with the POK channel engineers. I asked where all
their own people were. They said that the interface issues with the POK
channel people used to be handled by their senior disk engineers ...
but so many had left for startups in Silicon Valley.

Trivia: later I wrote an internal-only report on the effort and
happened to mention the MVS 15-minute MTBF .... I was told that brought
the wrath of the MVS group down on my head, first trying to get me
fired, and then doing whatever they could to make my time at IBM
unpleasant (apparently part of it was that what they were reporting up
their executive chain didn't always correspond with reality). As an
aside, this is later old email from just before 3380s shipped to
customers ... the FE had a 57-case simulated 3380 error regression
test; in all cases, MVS was failing, requiring re-IPL ... and in
two-thirds of the cases, there was no indication of what had caused the
failure:
http://www.garlic.com/~lynn/2010n.html#email801015

--
virtualization experience starting Jan1968, online at home since Mar1970

Sarbanes-Oxley claimed that it would guarantee executives (and
auditors) did jail time for fraudulent financial filings. GAO started
doing reports on fraudulent financial filings (even showing an increase
after SOX went into effect), with nobody doing jail time. This included
fraudulently inflated financial filings (as part of increasing
executive bonuses) and later restatements.
http://www.gao.gov/products/GAO-06-678
In 2002, GAO reported that the number of restatement announcements due
to financial reporting fraud and/or accounting errors grew
significantly between January 1997 and June 2002, negatively impacting
the restating companies' market capitalization by billions of
dollars. GAO was asked to update key aspects of its 2002 report
(GAO-03-138).

...
As was the case in the 2002 report, a significant portion of SEC's
enforcement activities involved accounting- and auditing-related
issues. Enforcement cases involving financial fraud- and
issuer-reporting issues ranged from about 23 percent of total actions
taken to almost 30 percent in 2005.

Self-service PC

charlesm@MCN.ORG (Charles Mills) writes:
Agreed. I did an HR systems evaluation a few years back (why is a
coder evaluating HR systems? Don't ask.) and all were big on
"self-service," by which they meant if an employee, for example,
wanted to know how many vacation days s/he had in the bank, s/he did
not have to call HR, s/he just signed onto the HR system with a Web
browser (and with "role-based authority" much lower than an HR person)
and looked.

Twenty years ago it was webifying callcenter menu screens ... it had to
have a computer-based authentication front-end, restricting access to
information to just the authenticated entity. It has been 20 years of
reducing callcenter use (not having a real person at the other end).

Slight topic drift ... 20 years ago, consumer dialup online banking
operations were making presentations at financial conferences on the
motivation for moving to the internet: primarily the development and
support costs for proprietary modem drivers (at the time >60 drivers
were typical) and the dialup infrastructure, the enormous support costs
associated with serial-port modems, etc. ... all of which gets
offloaded to the ISP. Note that at the same time, the commercial dialup
online banking operations were saying that they would *NEVER* move to
the internet because of a long list of exploits and vulnerabilities
(many of which persist to this day) ... as an aside, the commercial
dialup online banking operations have subsequently moved to the
internet anyway.

Self-service PCs in the past were typically associated with "kiosk",
library, etc., public PCs that anybody could walk up to (like in-store
machines for stock and/or price checks) ... as opposed to the webified
callcenter operations.

At the time, it was much more oriented towards simplifying the security
office's handing out of fine-grain access ... and codifying multi-party
operations as a countermeasure to insider threats (no single person had
sufficient authority to complete any high-value operation).
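A minimal sketch of that kind of multi-party check (a "two-person rule"); the roster, names, and threshold here are hypothetical, purely to illustrate that no single person can complete a high-value operation:

```python
# Hypothetical two-person-rule check: a high-value operation proceeds
# only when approved by at least two distinct, authorized people.

AUTHORIZED = {"alice", "bob", "carol"}   # hypothetical security-office roster
REQUIRED_APPROVALS = 2                   # no single insider can act alone

def may_execute(approvers):
    """True only if enough distinct, authorized people signed off."""
    valid = set(approvers) & AUTHORIZED  # drop unknown names and duplicates
    return len(valid) >= REQUIRED_APPROVALS

print(may_execute(["alice"]))            # False: single approver
print(may_execute(["alice", "alice"]))   # False: duplicates don't count
print(may_execute(["alice", "bob"]))     # True: two distinct approvers
```

The point of the set intersection is that neither repeating one's own approval nor recruiting an unauthorized outsider gets an insider past the threshold.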


Decimal point character and billions

Michael Black <et472@ncf.ca> writes:
That's the thing, "illiterate" doesnt' have to mean "lacking
intelligence". There's an assumption that someone is a failure, when
it may be the situation that's the failure. So they can get by, and
live sometimes quite successfully, by putting a lot of effort into
hiding the fact that they can't read, and creating workarounds to not
being able to read.

Five levels:
https://www.ets.org/literacy/scores/
Minimum Proficiency

Individuals performing at Level 3 are able to integrate information from
relatively long or dense text or from documents. They are considered to
possess the minimum level of literacy skills to function successfully in
today's society.

... levels 1 & 2 account for almost half the population ... level 3 is
the minimum to be functionally literate ... which results in levels
1 & 2 being described as functionally illiterate.

the legacy of Seymour Cray

RS Wood <rsw@therandymon.com> writes:
Before Steve Jobs, there was Seymour Cray – father of the
supercomputer and regarded as something close to a God in the circles
he moved in.
Jobs' Apple Computer is reputed to have bought one of Seymour's
massive machines back in the day: a Cray, to design the brand-new
Macintosh personal computer.

A former co-worker at IBM left and for a time in the mid-80s had a job
programming the Cray for Apple. It had a Cray 100-Mbyte
channel-attached high-resolution display ... used to simulate screens
and response times (studying human factors).

Other trivia ... the former co-worker was also a member of the San Jose
astronomy club and told stories of Lucas bringing early Star Wars
drafts for the members to review.

More trivia ... my brother was a regional Apple marketing rep (largest
CONUS physical-area region) and I would sometimes get invited to
business dinners and even argue Macintosh design with Mac engineers
(before it was announced).

the legacy of Seymour Cray

"Joe Morris" <j.c.morris@verizon.net> writes:
The museum is free and open to the public, and while it's at the NSA it's an
uncontrolled area outside the security perimeter, with a big sign just
inside the front door reminding visitors to put their badges away. Just be
sure to follow the directions to get there; a wrong turn would put you into
the NSA security entry queue.

The building isn't particularly large, so the exhibits represent only a part
of what the museum owns, but the staff there is familiar with a lot of what
isn't being displayed.

The first time I visited, they had an MLS (multi-level security)
display (next to the STK tape library) ... I tried to con them into
letting me have a copy of the MLS video ... I had some thought of doing
a voice-over parody of MLS.

Google has been doing something weird recently (coming back with
nothing found) ... I tried a search on the NSA museum folklore and it
came back with nothing found. I tried the same search on other search
engines ... and they came back with loads of NSA-related references
... including the above that I cited.


Did 1st EMV transaction

Did my first EMV transaction today ... and it is at least as slow as
the original spec, 20-some years ago ... which was one of the reasons I
was asked to do a chip/protocol much more secure and of higher
integrity than EMV, one that was low-power and superfast and could meet
contactless transit turnstile requirements. Reference to demo/booth at
the 1999 worldwide retail banking show:
http://www.garlic.com/~lynn/99.html#217
http://www.garlic.com/~lynn/99.html#224

There was a large-scale US pilot of EMV at the turn of the century
... but it was in the "Yes Card" period ... and they went ahead and did
it anyway despite warnings. In the aftermath, all evidence of the pilot
appeared to disappear without a trace, and there was speculation that
it would be a long time before it was tried again in the US (letting
the bugs be worked out in other jurisdictions). At the bottom of this
trip report (gone 404 but living on at the Wayback Machine) there is
discussion of the Yes Card at Cartes 2002:
http://web.archive.org/web/20030417083810/http://www.smartcard.co.uk/resources/articles/cartes2002.html

There was also a presentation about the exploits by a federal LEO at an
ATM Integrity Taskforce meeting, prompting somebody in the audience to
exclaim that "they" had managed to spend billions of dollars to prove
chip cards are less secure than magstripe.

the legacy of Seymour Cray

hancock4 writes:
In his memoir, Tom Watson wrote about his frustration that CDC beat
IBM to the market with a powerful high end computer. Watson said that
later he realized that such high end computers were specialty items,
like expensive limited edition high performance sports cars, and that
IBM should focus more on the conventional market.

As we know, IBM lost serious money on STRETCH, although the R&D for
STRETCH contributed greatly to later IBM work. I wonder if IBM's
other high end machines, like the 85, 91, 95, and 195, also lost money
due to a limited customer base of those wanting fast floating point
arithmetic. Due to limited demand, my guess is that those high end
machines were largely hand-built.

The IBM S/360 history generally does not tell how many machines of
each model were sold, nor cost and revenues of them. On the other
hand, the R&D for the high end machines may have contributed toward
later developments, as did STRETCH. (However, some may have pushed
SLT to the limit, which was replaced by monolithic circuits.)

ACS END
http://people.cs.clemson.edu/~mark/acs_end.html
As the quote above indicates, the ACS-1 design was very much an
out-of-the-ordinary design for IBM in the latter part of the 1960s. In
his book, Data Processing Technology and Economics, Montgomery Phister,
Jr., reports that as of 1968:

Of the general-purpose systems having the largest fraction of total
installed value, the IBM S/360 Model 30 was ranked first with 12%
(rising to 17% in 1969). The S/360 Model 40 was ranked second with 11%
(rising to almost 15% in 1970). [Figs. 2.10.4 and 2.10.5]

Of the number of operations per second in use, the IBM S/360 Model 65
ranked first with 23%. The Univac 1108 ranked second with slightly over
14%, and the CDC 6600 ranked third with 10%. [Figs. 2.10.6 and 2.10.7]

... snip ...

I mentioned that while still an undergraduate, in the summer of '69 I
was brought into Boeing to help with setting up Boeing Computer
Services (BCS) ... consolidating all data processing in an independent
business unit to better monetize the investment (a little like cloud
computing today).

At the time, I thought the Renton datacenter was possibly the largest
in the world ... all that summer, 360/65s were arriving in Renton
faster than they could be installed. The claim was that the Renton
datacenter had something like $300M (in '69 dollars) of IBM equipment.

Modern computer brochures; military security; then and now ?

Michael Black <et472@ncf.ca> writes:
For some things, the military and/or governments expect multiple
sources. They don't want to rely on a single source. So there's that.
A lot of semiconductors were second sourced, including
microprocessors.

There was a major uptick of outsourcing and no-bid contracts in the
last decade ... not just the military-industrial complex ... but other
agencies as well.

How Private Contractors Have Created a Shadow NSA; A new cybersecurity
elite moves between government and private practice, taking state
secrets with them (also references oil rig company that was transformed
into one of the largest defense contractors after former SECDEF and
future VP becomes CEO, including no-bid contracts in Iraq)
http://www.thenation.com/article/how-private-contractors-have-created-shadow-nsa/

The head of IBM in the '90s had been involved in the private-equity
industry, and after leaving IBM went to head up one of the largest
private-equity companies ... which then did an LBO (private-equity)
takeover of the company that employed Snowden:
http://www.investingdaily.com/17693/spies-like-us/
Private contractors like Booz Allen now reportedly garner 70 percent of
the annual $80 billion intelligence budget and supply more than half of
the available manpower. They're not going away any time soon unless the
CIA and NSA want to start over and with some off-the-shelf laptops,
networked by the Geek Squad from Best Buy. Security clearances used to
be a government function too, but are now a profit center for various
private-equity subsidiaries.

... snip ...

Private-equity victim companies are frequently under heavy pressure to
turn revenue every way possible ... the private-equity, for-profit
companies doing outsourced security clearances were just filling out
the paperwork ... not bothering to do the (expensive) background checks.

This goes into the locking in of congressional votes for DOD weapons
programs (in this case the F22) ... which includes parceling out bits
and pieces to every congressional district ... significantly impacting
manufacturing quality:
http://nypost.com/2009/07/17/cant-fly-wont-die/

but then they allowed the F22 "to die" when they managed to replace it
with the much more expensive F35 weapons program (using the same
methodology).

And from the Navy:
http://www.seapowermagazine.org/stories/20151001-mccain.html

IBM bid a triple-redundant RS/6000 running custom software. Application
implementors were told that they didn't have to worry about outages
and/or failures and recovery ... because the system software would mask
all (FAA hardware) failures.

It turns out that a review of FAA failure modes found some number at
the application/business (flight control) level ... it wasn't a simple
matter of low-level hardware outages but of more complex FAA
operational failures ... which required rework of the application-level
design and implementation ... which was never completed.

Before leaving IBM in 1992, we would periodically visit the technical
assistant to the Federal Systems Division president ... he was doing
double duty as TA on 1st shift and spent 2nd shift programming Ada for
the FAA effort.

IBM went into the red in 1992 and was being reorganized into the 13
Baby Blues in preparation for breaking up the company, before the board
brought in a new CEO to resurrect the company and reverse the breakup
... posts:
http://www.garlic.com/~lynn/submisc.html#gerstner

There were still major changes. The Federal Systems Division (which was
responsible for the FAA modernization contract) was sold off to Loral:
http://www.nytimes.com/1993/12/14/business/ibm-to-sell-its-military-unit-to-loral.html
Mr. Schwartz also said he regarded Federal Systems' air-traffic
control software as a "hidden asset." Federal Systems is currently
leading an overhaul of the F.A.A. system, a project plagued by cost
overruns. Indeed, as the I.B.M.-Loral deal was being announced
yesterday, the F.A.A.'s chief, David Hinson, ordered a review of the
overhaul project.

The new air-traffic control system, whose cost was estimated at $2.5
billion when it was planned in 1983, is now expected to cost more than
$5 billion. But Mr. Schwartz predicted that the Federal Systems
technology would not only satisfy the F.A.A. but also find markets
abroad.

This may be pure obfuscation and misdirection ... since he was a major
part of it at the time. There have been recent claims that the statute
of limitations has now expired for many of the crimes ... so it is
"safe" to make all sorts of claims. In the middle of too big to fail
(too big to prosecute and too big to jail), he was behind most of the
bailout (and fought a long legal battle to prevent disclosing what was
really going on). Only $700B was appropriated for TARP, supposedly to
buy TBTF toxic assets ... but just the four largest TBTF banks were
still holding $5.2T "off-book" at the end of 2008. It was the FED that
was providing tens of trillions in ZIRP funds (estimates are that the
TBTF banks are clearing $300B/annum) and buying trillions of the
off-book toxic assets at 98 cents on the dollar.

Shortly after the FED was forced to publicly disclose what it was
doing, Bernanke held a press conference where he said that he had
expected the TBTF would use the ZIRP funds to help mainstreet, but
when they didn't he had no way to force them (though that didn't stop
the flow of ZIRP funds). Note also, presumably Bernanke was chosen in
part because he was a depression-era scholar ... the FED had tried
something similar back then with the same results (so Bernanke should
have had no expectation that they would do anything differently).
http://www.garlic.com/~lynn/submisc.html#bernanke

--
virtualization experience starting Jan1968, online at home since Mar1970

the legacy of Seymour Cray

hancock4 writes:
I think the 370 clone vendors, e.g. Amdahl, Hitachi, etc, sold a fair
amount of machines. My guess the problem was economies of scale,
making production costs high. Also, I think once the clones came out,
IBM had fully unbundled, so one still had to buy system software
regardless of who made the hardware. The book by Campbell-Kelly says
IBM makes a lot of money from renting CICS, for example.

2012 numbers: only about 4% of IBM revenue was from mainframe
processors ... but the mainframe division accounted for a total of 25%
of IBM revenue (and 40% of profit) ... mainframe software and
services.

processor revenue seems to have dropped off since then ... but
software and services continue to be quite a big part of revenue.

the legacy of Seymour Cray

hancock4 writes:
Students had the lowest priority in running jobs; administrative and
research work took precedence.

Commuter students were at a disadvantage as they needed to run their
jobs during the day when the machine was busiest. Resident students
would come back in the evening when the machine was less busy.

I believe Mr. Wheeler indicated they discovered that the overhead in
setting up a simple student job was much longer than the job itself,
and very inefficient when tons of students were running jobs. I think
he said he did an OS modification to resolve that. Some classes
'batched' student jobs together as multiple job sets in a single job
which improved efficiency.

One of the difficulties back then was waiting for your printout to be
removed from the printer and delivered to your bin. A university
computer room could have a large multitude of bins and many jobs,
keeping the print operator very busy. It was frustrating to know your
job had printed, but was waiting for the print operator to deliver it.

At our university, each student was given a budget to run their jobs;
if you ran too many jobs to get your homework done, you ran out of
money, and that was a problem. Again, resident students had an
advantage over commuters in that they could do their work during
evening or weekend shifts when rates were lower.

360/65 (actually a 360/67 running w/o virtual memory turned on) with
os/360 MFT ... using standard 3-step FORTRAN G compile, link-edit & go
... started out at well over a minute elapsed time ... the same job
had run in less than a second on a 709 with IBSYS tape-to-tape.

Adding HASP cut the 3-step FORTGCLG to a little over half a minute. I
then did a "SYSGEN" where I took apart the STAGE2 output from STAGE1
... and very carefully reorganized everything so that execution order
would place files & PDS members optimizing 2314 arm seek motion & PDS
member multi-track search ... cutting time to 12.9 seconds (for a null
RETURN/END FORTRAN compile, link-edit & go ... effectively all 3-step
scheduler processing overhead) ... nearly a three times speedup.

the above also includes timing for running the os/360 job stream under
(virtual machine) cp/67 after I had rewritten large sections and cut
CP67 overhead from 856-322=534 seconds to 435-322=113 seconds (some
parts of CP67 I sped up nearly 100 times).

It wasn't until we installed WATFOR at the univ. that student job
times got better than 709 elapsed time. WATFOR was a single-step
FORTRAN compile&execute that would batch a whole sequence of student
jobs in one OS/360 execution (more analogous to the 709 IBSYS
tape-to-tape monitor). Typically, the operator would wait until a card
tray of student jobs had accumulated, then place OS/360 WATFOR control
cards on the front and run it all as a single OS/360 job step. A
typical student job had nearly null execution time so nearly all was
compile time ... I have some vague memory that WATFOR compiled at
20,000 statements/cpu-min on the 360/65.
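A rough sanity check on those numbers (a back-of-the-envelope sketch:
the ~20,000 statements/cpu-min figure is the vague recollection above,
and the ~30-statement student program size is purely an assumption for
illustration):

```python
# rough back-of-envelope on WATFOR batch throughput
# (figures from the text are recollections, not measurements;
# AVG_STMTS is an assumed illustrative value)
STMTS_PER_MIN = 20_000          # recalled WATFOR compile rate on 360/65
AVG_STMTS = 30                  # assumed size of a typical student program

compile_sec = AVG_STMTS / (STMTS_PER_MIN / 60.0)   # cpu time per job
jobs_per_min = STMTS_PER_MIN / AVG_STMTS           # jobs per cpu-minute
print(compile_sec, jobs_per_min)   # ~0.09 sec/job, ~667 jobs/cpu-min
```

which is why batching a whole card tray of student jobs into one job
step mattered: the per-job compile time was tiny compared to the
multi-second 3-step scheduler overhead of running each one separately.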

rationality

simon@twoplaces.co.uk (Simon Turner) writes:
A friend claims that there's no such thing as pure altruism: people who
do apparently altruistic things are doing it for their own benefit,
either because of the warm glow it gives them, or because at some level
they will indirectly benefit from their act. I'm not sure I agree, but
it's an interesting idea.

there have been some recent studies trying to differentiate what
dominates in natural selection ... individual survival (selfishness) or
group survival (altruism).

some of this came out of investigations in the aftermath of the
economic mess last decade, finding that wallstreet attracts a high
percentage of sociopaths that view everybody else as prey/victims
... and frequently also lack a sense of consequences for their actions
(behavior to achieve immediate advantage ... regardless of what may
follow later).

at the congressional Madoff hearings they had the person that tried
unsuccessfully for a decade to get the SEC to do something about
Madoff (the SEC's hands were forced when Madoff turned himself
in). TV business programs tried to get him for an interview ... but
all they got was his lawyer. The lawyer made reference that the person
didn't want to appear in public ... one of the scenarios for why the
SEC never bothered to do anything about Madoff for over a decade was
that Madoff was involved with violent criminal elements that also
controlled the SEC ... and he was afraid that they might want
retribution for his attempts to shut down the operation. A year later,
on a book tour, the person was available for public interviews and was
asked about his not appearing in public a year earlier. He said he had
changed his mind, that Madoff had possibly swindled some violent
criminal elements and turned himself in looking for government
protection (but there was no explanation regarding the SEC's
inaction).

the legacy of Seymour Cray

Jon Elson <elson@pico-systems.com> writes:
I will say that the way IBM did disk I/O was WAY more efficient than any
other system I was familiar with at the time. Records were read off the
disk (or tape) directly into the user's buffer, not copied several times
during record unpacking from an OS buffer to the user's buffer.

IBM allowed channel programs to be built by an application library ...
with data read/written directly into the application address space
... either directly into application buffers or into buffers managed
by library code running in the application space.

However, the original 360 design point was very limited real storage
... so the trade-off was to do lots of stuff with indexes on disk and
multi-track search (spending i/o & channel capacity on sequential
search of on-disk indexes rather than scarce real storage on cached
indexes) ... aka CKD dasd.

However, by the mid-70s this trade-off started to flip ... real
storage was increasing significantly and i/o capacity was becoming the
major bottleneck. The VTOC was the disk volume directory of files. A
multi-track search would serially scan the VTOC for the file entry to
be opened/used ... the VTOC entry contained the file location on disk
(and other misc. information). A VTOC scan could involve several disk
revolutions ... tying up the disk, controller and channel (unusable
for any other purpose in the meantime).

The other major use of multi-track search was for "PDS" libraries
... nearly all executables and several other uses. A standard PDS
library could involve a 2-3 cylinder directory. A multi-track search
would sequentially scan the directory tracks until the desired PDS
member entry was found.

I've periodically mentioned getting called into the datacenter of a
large national retailer that was experiencing significant performance
degradation with their store online transaction system under heavy
load. They had several 370/168-3s and a large shared CKD 3330 DASD
pool dedicated to running store online transactions under MVS. All the
national experts had been called in before resorting to calling me. I
was brought into a classroom where all the tables were covered with
foot-high stacks of performance reports. After about 30 mins I started
to notice a correlation: the aggregate I/O activity across all the
168-3 processors for one specific volume was peaking at 7 I/Os per
second at peak load ... which was a little unusual since a 3330 was
nominally considered to have a throughput of 30-40 I/Os per second.

It turned out that the 3330 contained the PDS online transaction
library for all the 168 systems and all stores. It had a 3-cylinder
PDS library directory ... and each transaction required reading the
library directory to find the transaction executable and then load
it. On avg., a PDS directory search is 1.5 cylinders ... for the 3330
that is one full-cylinder multi-track search of 19 tracks taking 19
revolutions at 3600RPM, or .317 seconds elapsed time ... followed by a
multi-track search of 9.5 tracks taking 9.5 revolutions, or .158
seconds ... so each transaction required an avg. of .475 seconds just
to find the location of the transaction executable to be loaded
(during which time the device, controller and channel are busy and
can't be used for anything else). The actual load takes around .03
seconds ... but with the PDS lookup time, the whole complex was
limited to a maximum of two transactions per second ... across all
systems and for all stores in the country.
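The arithmetic can be sketched out as a back-of-the-envelope check
(assumes the 3330 geometry of 19 tracks/cylinder and 3600 RPM; the
function and variable names are just for illustration):

```python
# back-of-envelope on the 3330 PDS directory search bottleneck
# (3330: 19 tracks/cylinder, 3600 RPM)
TRACKS_PER_CYL = 19
RPM = 3600
REV_TIME = 60.0 / RPM              # one revolution: ~16.7 ms

def search_time(avg_cylinders):
    """multi-track search ties up device, controller and channel for
    one full revolution per track scanned"""
    tracks = avg_cylinders * TRACKS_PER_CYL
    return tracks * REV_TIME

# 3-cylinder directory, avg search = 1.5 cylinders, done as one
# full-cylinder search (19 revs) plus a 9.5-track search
full_cyl = search_time(1.0)        # ~0.317 sec
half_cyl = search_time(0.5)        # ~0.158 sec
load = 0.03                        # actual member load
per_txn = full_cyl + half_cyl + load
print(per_txn, 1.0 / per_txn)      # ~0.505 sec/txn -> ~2 txns/sec
```

The same function shows why splitting the library helped: a 1-cylinder
directory averages only a 0.5-cylinder (9.5-track) search.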

The resolution was to split the online executable library into three
different parts (so the avg. PDS lookup is around 9.5 tracks instead
of 28.5 tracks) ... and then replicate the library so there is a
unique copy for each 168-3 system. That upped the transaction (load)
rate to 10-15 per second per system, or 40-60 per second for the whole
complex.

The IBM disk division did fixed-block (FBA) disks starting in the late
70s (3310 & 3370) ... but I was told by the MVS people that even if I
provided them with fully tested & integrated MVS FBA support, I would
still need an incremental $26M profit (to cover documentation and
education) ... basically $200M-$300M in new disk sales ... they then
qualified it by saying customers were buying disks as fast as they
could be made ... and if there was FBA support, customers would just
switch from buying CKD disks to the same amount of FBA disks. However,
part of FBA support includes moving off the enormously expensive
mid-60s multi-track search design. As an aside, no real CKD devices
have been made for decades, all being simulated on industry standard
fixed-block disks. Some past posts
http://www.garlic.com/~lynn/submain.html#dasd

Another problem is that channel programs use "real addresses", but
with the os/360 move to MVS ... the application library building
channel programs is now running in a virtual address space and all the
resulting channel programs are built with virtual addresses. Standard
OS/360 & MVS convention passes the address of the channel program in
the EXCP/SVC0 supervisor call. With the OS/360 move to virtual
addressing, the EXCP process now has to build a copy of the passed
channel program, substituting real addresses for the virtual
addresses. This turns out to be exactly the same process that (virtual
machine) CP67 had to do for running guest operating systems in a
virtual address space. It turns out the person doing the initial
prototype of the OS/360 move to virtual memory borrowed the CP67
"CCWTRANS" routine and crafted it into the side of EXCP processing.
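A minimal sketch of the kind of translation EXCP has to do
(hypothetical data structures, not the actual OS/360 or CP67 code; a
real CCWTRANS also has to pin the pages for the duration of the I/O
and split transfers that cross page boundaries into data-chained
CCWs):

```python
# minimal sketch of shadow channel-program construction: copy the
# application's CCW chain, translating each virtual data address to a
# real address through a page table (illustrative structures only)
PAGE_SIZE = 4096

def translate(vaddr, page_table):
    """map a virtual address to a real address via a simple page table
    (dict of virtual page number -> real frame number)"""
    vpn, offset = divmod(vaddr, PAGE_SIZE)
    frame = page_table[vpn]        # real code would also pin the page
    return frame * PAGE_SIZE + offset

def build_shadow_ccws(ccws, page_table):
    """copy a CCW chain, substituting real for virtual data addresses;
    each CCW is modeled here as (command, data_address, count)"""
    return [(cmd, translate(vaddr, page_table), count)
            for cmd, vaddr, count in ccws]

# toy example: one read CCW (command 0x02) for 80 bytes into a buffer
# at virtual 0x2050, where virtual page 2 maps to real frame 7
pt = {2: 7}
shadow = build_shadow_ccws([(0x02, 0x2050, 80)], pt)
print(shadow)   # [(2, 28752, 80)]
```

The channel then executes the shadow copy, never seeing the virtual
addresses the application library built the original chain with.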

The folklore at the time was that the president was going to veto
GLBA. The president of AMEX was in competition to be the next CEO of
AMEX and wins. The loser leaves and takes his protegee to Baltimore,
acquiring what was described as a loan sharking business. They make
some number of other acquisitions, eventually acquiring Citi in
violation of Glass-Steagall. Greenspan gives them an exemption while
they lobby congress for repeal of Glass-Steagall. They enlist several
in DC, including the secretary of treasury (and former head of
G-S). GLBA initially passes along party lines (54-44) ... but with the
prospect of a Clinton veto ... they go back and add provisions needed
to get a veto-proof vote (90-8)
https://en.wikipedia.org/wiki/Gramm%E2%80%93Leach%E2%80%93Bliley_Act

Other trivia: initially on the floor of congress, the primary purpose
of GLBA was described as keeping new competition out of banking, "if
you already have a bank charter you get to keep it, if you don't
already have a bank charter you can't get one" (especially mentioning
entities that would use technology to significantly increase
efficiency), before they started adding other provisions, including
repeal of Glass-Steagall.

Disclaimer: Jan2009, I was asked to HTML'ize the Pecora Hearings (30s
Senate hearings into the crash of '29 that resulted in criminal
convictions and Glass-Steagall) with lots of internal cross-links and
URLs between what happened this time and what happened then (the
reference being that maybe the new congress would have the appetite to
do something). I worked on it for a time and then got a call saying
that it wouldn't be needed after all (with a reference to enormous
piles of wallstreet money totally burying Washington DC).

Securitized mortgages had been used during the S&L crisis to obfuscate
fraudulent mortgages (the posterchild was office bldgs in the
Dallas/Ft.Worth area that turned out to be empty lots). In the late
90s, I was asked to look at improving the integrity of supporting
documents as a countermeasure. But by then loan originators were
securitizing loans&mortgages and paying for triple-A ratings (when
both the sellers and the rating agencies knew they weren't worth
triple-A, from Oct2008 congressional testimony). A triple-A rating
trumps supporting documentation, so they could start doing
no-documentation "liar" loans. Being able to pay for triple-A
eliminated any reason for loan originators to care about borrowers'
qualifications or loan quality; they could sell off all loans as fast
as they could be made to customers restricted to dealing in "safe"
investments (like large pension funds; the claim is it accounts for a
30% loss in funds and a trillions shortfall for pensions), largely
enabling over $27T done 2001-2008
Evil Wall Street Exports Boomed With 'Fools' Born to Buy Debt
http://www.bloomberg.com/apps/news?pid=newsarchive&refer=home&sid=a0jln3.CSS6c

In the wake of ENRON, congress passed Sarbanes-Oxley, claiming that it
would prevent future ENRONs and guarantee that executives and auditors
responsible for fraudulent financial filings do jail time. However, it
required the SEC to do something. Possibly because even the GAO didn't
believe the SEC was doing anything, the GAO started doing reports of
fraudulent financial filings, even showing an uptick after
Sarbanes-Oxley went into effect (and nobody doing jailtime). Less well
known, SOX also has a provision for the SEC to do something about the
rating agencies (which played the pivotal role in the economic
mess). #4 on Time's list of those responsible for the economic mess
http://content.time.com/time/specials/packages/article/0,28804,1877351_1877350_1877330,00.html

If being able to pay for triple-A (eliminating any concern about loan
quality) wasn't bad enough ... they start designing toxic CDOs to
fail, paying for triple-A, selling to their victim customers, and then
taking out derivative CDS gambling bets that they would fail (creating
enormous demand for bad loans ... a change from just not caring about
loan quality). The sec. of treasury last decade (also a former head of
G-S) then pushes for TARP funds supposedly to bail out the TBTF by
buying toxic assets. However, only $700B was appropriated, and just
the four largest TBTF were holding $5.2T "off-book" at the end of
2008.

The largest holder of the CDS gambling bets was AIG, which was
negotiating to pay off at 50 cents on the dollar, when the sec. of
treasury steps in, forces AIG to sign a document that they can't sue
those making the gambling bets, and to take TARP funds to pay off at
100 cents on the dollar. The largest recipient of TARP funds was AIG,
and the largest recipient of the face-value payoffs was the TBTF
formerly headed by the sec. of treasury.

Note also, in the testimony at the Oct2008 congressional hearings into
the role that the rating agencies played, they also said that the
rating agency business model became misaligned and incented to do the
wrong thing when they switched from buyers paying for the rating (as
accurate as possible in the interest of the buyer) to sellers paying
for the rating (the rating becomes whatever the seller is willing to
pay for) ... and that regulation becomes enormously more difficult
when the business is motivated to do the wrong thing.

Things started with the SEC failing to do anything about the rating
agencies ... sellers no longer needed to care about loan quality
because they could pay for triple-A and sell off every loan as fast as
they could make them. Then things got enormously worse because the
CFTC wasn't allowed to regulate gambling derivatives/CDS ... they
would design securitized mortgages to fail (requiring enormous numbers
of bad mortgages), pay for triple-A, sell to their victim customers,
and then take out derivative/CDS gambling bets that they would
fail. Then the sec. of treasury steps in and blocks legal action being
taken against those running the CDS gambling scam.

However, repeal of Glass-Steagall was important because it 1) let
banks play in the high-risk games, 2) let the pure high-risk
investment banks also get the (real) bailouts, and 3) enabled too big
to fail, too big to prosecute, too big to jail. The FED fought a hard,
long legal battle to prevent public disclosure of what it was doing
behind the scenes ... tens of trillions in ZIRP funds and buying
trillions in off-book toxic assets at 98 cents on the dollar. For a
TBTF to get a FED bailout required a banking charter ... repeal of
Glass-Steagall allowed existing regulated banks to play in the
high-risk activities ... but still get a bailout when the high-risk
stuff went belly up ... the four largest TBTF playing high-risk
(holding $5.2T in toxic assets at the end of 2008) were Citi,
JPMorgan/Chase, Wells Fargo and BofA. However, as part of the bailout,
the FED handed out banking charters to the pure high-risk investment
banks (like G-S), making them also eligible for the FED feeding trough
(in theory this should have violated the original purpose of GLBA
... which only later added repeal of Glass-Steagall).

The same person primarily responsible for GLBA and repeal of
Glass-Steagall ... was then responsible (along with his wife) for
blocking the CFTC from regulating the gambling derivatives/CDS
... which is where AIG comes in.

Glass-Steagall prevented "safe", regulated, federally backed
depository institutions from engaging in risky activity. Repeal of
Glass-Steagall allowed them to get into enormously risky gambling
activities ... where they get to keep the winnings ... but the tax
payers are on the hook for the losses.

The four largest TBTF engaged in the risky gambling activity were
Citi, JPMorgan/Chase, Wells Fargo, and BofA. The FED chairmen (first
greenspan and then bernanke) allowed them to carry their risky
gambling (which would have been fraudulent under glass-steagall) "off
the books". At the end of 2008 just those four were still carrying
$5.2T in toxic assets "off-book" ... of the total $27+T done
2001-2008. In the summer and fall of 2008, several tens of billions in
these toxic assets had gone for 22 cents on the dollar. If the TBTF
had been forced to bring their risky gambling off-book toxic assets
back onto the books, they would have been declared insolvent and
forced to be liquidated.

Repeal of Glass-Steagall, by allowing the big regulated financial
institutions to play, enormously increased the amount of activity
... possibly by an order of magnitude (a major portion of the over
$27T done 2001-2008). They would have been the first to crash if it
weren't for their being allowed to carry the activity "off the books"
(just the four largest still with $5.2T at the end of 2008) ... and
the FED backing them up. The other guys were doing tens of billions,
not trillions.

I think Lehman was playing with something like $50B that it was moving
off books ... compared to over $1.2T-$1.5T for each of the four
largest TBTF ... aka each of the four largest TBTF was still in it at
25-30 times Lehman's level at the end of 2008.

Minor note ... there are two different uses of "G-S" here ... one for
Glass-Steagall ... and another for "Goldman Sachs". A former head of
Goldman Sachs was secretary of treasury and a major player in getting
Glass-Steagall repealed ... initially on behalf of CITI. Another
former head of Goldman Sachs was secretary of treasury and the major
force behind TARP ... and behind forcing AIG to take TARP funds to pay
off the gambling CDS at face value. This gave rise to the joke that
the dept. of treasury had become the Goldman Sachs branch office in
washington dc.

The four largest TBTF "banks" were still carrying $5.2T "off-book" at
the end of 2008 ... over 100 times that of Lehman ... allowed to play
by different rules
Bank's Hidden Junk Menaces $1 Trillion Purge
http://www.bloomberg.com/apps/news?pid=newsarchive&sid=akv_p6LBNIdw&refer=home

Later, a son presides over the economic mess, 70 times larger than the
S&L crisis. In the S&L crisis there were 30,000 criminal referrals and
1,000 criminal convictions ... there have been *ZERO* criminal
referrals/convictions for the economic mess (proportionally, there
should have been 70,000 criminal convictions). Even if the economic
mess had been limited to 1/100th its size, it would still have been
slightly over half the size of the S&L crisis.

--
virtualization experience starting Jan1968, online at home since Mar1970

When the secretary of treasury helped the CEO of CITI get the ball
rolling to repeal Glass-Steagall and then resigned to join CITI, he
was replaced by one of his protegees ... who also shows up in "is
Harvard responsible for the rise of Putin?":

John Foster Dulles played major role in rebuilding Germany's economy
and military during 20s&30s. The Brothers: John Foster Dulles, Allen
Dulles, and Their Secret World War,

loc865-68:
In mid-1931 a consortium of American banks, eager to safeguard their
investments in Germany, persuaded the German government to accept a
loan of nearly $500 million to prevent default. Foster was their
agent. His ties to the German government tightened after Hitler took
power at the beginning of 1933 and appointed Foster's old friend
Hjalmar Schacht as minister of economics.

loc873-79:
Sullivan & Cromwell floated the first American bonds issued by the
giant German steelmaker and arms manufacturer Krupp A.G., extended
I.G. Farben's global reach, and fought successfully to block Canada's
effort to restrict the export of steel to German arms makers.

loc905-7:
Foster was stunned by his brother's suggestion that Sullivan &
Cromwell quit Germany. Many of his clients with interests there,
including not just banks but corporations like Standard Oil and
General Electric, wished Sullivan & Cromwell to remain active
regardless of political conditions.

loc938-40:
At least one other senior partner at Sullivan & Cromwell, Eustace
Seligman, was equally disturbed. In October 1939, six weeks after the
Nazi invasion of Poland, he took the extraordinary step of sending
Foster a formal memorandum disavowing what his old friend was saying
about Nazism

From the law of unintended consequences ... when the 1943 US Strategic
Bombing program needed location of military & industrial targets
in Germany, it got them from wallstreet.

reference to June 1940 victory celebration held at Waldorf Astoria in
NYC, Intrepid loc1925-29:
One prominent figure at the German victory celebration was Torkild
Rieber, of Texaco, whose tankers eluded the British blockade. The
company had already been warned, at Roosevelt's instigation, about
violations of the Neutrality Law. But Rieber had set up an elaborate
scheme for shipping oil and petroleum products through neutral ports
in South America. With the Germans now preparing to turn the English
Channel into what Churchill thought would become "a river of blood,"
other industrialists were eager to learn from Texaco how to do more
business with Hitler.

Later the same year in Dec 1940, US corporations have a conference at
Waldorf Astoria in NYC, where they decide to launch a major propaganda
campaign because corporations had gotten such a bad reputation from
WW1 profiteering, crash of '29, the depression and supporting the
Nazis and Hitler.

the legacy of Seymour Cray

hancock4 writes:
To produce a report requiring complex selection and calculations, it
often helps to use COBOL to do the hairy stuff, and then a 4GL
(e.g. Easytrieve et al) to knock the report. The 4GL saves the burden
of hand coding columns, headers, footers, etc., indeed, if the basic
data has been assembled into a file, the 4GL program can be very
brief. Saves a lot of time.

When things were crashing, there were articles claiming it was all
because of faulty (risk management) math ... which turned out to be
misdirection and obfuscation; risk managers were reporting that
business executives had been forcing them to approve deals ... and
there were calls to make risk depts. immune from business executive
pressure.

Next were reports of a consultant hired by wallstreet who was
recommending that they tie up all the prominent economists: hire them,
put them on retainers, give large grants to univ. depts., etc. ... in
order to bias their public views

loc72-74:
"Only through having been caught so blatantly with their noses
in the troughs (e.g. the 2011 Academy Award -- winning documentary
Inside Job) has the American Economic Association finally been forced to
adopt an ethical code, and that code is weak and incomplete compared
with other disciplines."

loc957-62:
The AEA was pushed into action by a damning research report into the
systematic concealment of conflicts of interest by top financial
economists and by a letter from three hundred economists who urged the
association to come up with a code of ethics. Epstein and
Carrick-Hagenbarth (2010) have shown that many highly influential
financial economists in the US hold roles in the private financial
sector, from serving on boards to owning the respective
companies. Many of these have written on financial regulation in the
media or in scholarly papers. Very rarely have they disclosed their
affiliations to the financial industry in their writing or in their
testimony in front of Congress, thus concealing a potential conflict
of interest.

Securitized mortgages (CDOs) had been used during the S&L crisis to
obfuscate fraudulent mortgages (the posterchild was office bldgs in
the Dallas/Ft.Worth area that turned out to be empty lots). In the
late 90s, I was asked to look at improving the integrity of supporting
documents as a countermeasure. But by then loan originators were
securitizing loans&mortgages (CDOs) and paying for triple-A ratings
(when both the sellers and the rating agencies knew they weren't worth
triple-A, from Oct2008 congressional testimony). A triple-A rating
trumps supporting documentation, so they could start doing
no-documentation "liar" loans. Being able to pay for triple-A
eliminated any reason for loan originators to care about borrowers'
qualifications or loan quality; they could sell off all loans as fast
as they could be made to customers restricted to dealing in "safe"
investments (like large pension funds; the claim is it accounts for a
30% loss in funds and a trillions shortfall for pensions), largely
enabling over $27T done 2001-2008
Evil Wall Street Exports Boomed With 'Fools' Born to Buy Debt
http://www.bloomberg.com/apps/news?pid=newsarchive&refer=home&sid=a0jln3.CSS6c

The math wasn't good ... but in addition, it was based on the
underlying fabricated "triple-A" ratings.

John Foster Dulles played major role in rebuilding Germany's economy and
military during 20s&30s. The Brothers: John Foster Dulles, Allen Dulles,
and Their Secret World War, loc865-68:
In mid-1931 a consortium of American banks, eager to
safeguard their investments in Germany, persuaded the German government
to accept a loan of nearly $500 million to prevent default. Foster was
their agent. His ties to the German government tightened after Hitler
took power at the beginning of 1933 and appointed Foster's old friend
Hjalmar Schacht as minister of economics.

loc873-79:
Sullivan & Cromwell floated the first American bonds issued by the
giant German steelmaker and arms manufacturer Krupp A.G., extended
I.G. Farben's global reach, and fought successfully to block Canada's
effort to restrict the export of steel to German arms makers.

loc905-7:
Foster was stunned by his brother's suggestion that Sullivan &
Cromwell quit Germany. Many of his clients with interests there,
including not just banks but corporations like Standard Oil and
General Electric, wished Sullivan & Cromwell to remain active
regardless of political conditions.

loc938-40:
At least one other senior partner at Sullivan & Cromwell, Eustace
Seligman, was equally disturbed. In October 1939, six weeks after the
Nazi invasion of Poland, he took the extraordinary step of sending
Foster a formal memorandum disavowing what his old friend was saying
about Nazism

From the law of unintended consequences ... when the 1943 US Strategic
Bombing program needed location of military & industrial targets in
Germany, it got them from wallstreet.

June1940, Germany had a victory celebration at the Waldorf-Astoria in
NYC with major corporations. Lots of them were there to hear how to do
business with the Nazis, Intrepid: loc1901-4:
One prominent figure at the German victory celebration was Torkild
Rieber, of Texaco, whose tankers eluded the British blockade. The
company had already been warned, at Roosevelt's instigation, about
violations of the Neutrality Law. But Rieber had set up an elaborate
scheme for shipping oil and petroleum products through neutral ports
in South America. With the Germans now preparing to turn the English
Channel into what Churchill thought would become "a river of blood,"
other industrialists were eager to learn from Texaco how to do more
business with Hitler.

Dec1940, 5,000 industrialists from across the nation hold their annual
meeting at the Waldorf-Astoria in NYC. Corporations had gotten such a
bad reputation from WW1 war profiteering, the crash of '29, the
depression, supporting Hitler and the Nazis, etc ... that they decide
to launch a major propaganda campaign.

The congressional Madoff hearings had the person that tried
unsuccessfully for a decade to get the SEC to do something about
Madoff (the SEC's hands were forced when Madoff turned himself in). At
the time of the hearing, TV news constantly tried to get him for
interviews ... but he sent a lawyer instead (something about being
worried about possible retribution from the people behind the SEC's
inaction).

Note also, the rhetoric in congress for Sarbanes-Oxley was that it
would guarantee that executives (and auditors) did jail time ... but
it required that the SEC do something. Possibly because even the GAO
didn't believe the SEC was doing anything, the GAO started doing
reports of fraudulent financial filings, even showing increases in the
numbers of fraudulent filings after Sarbanes-Oxley went into effect
(and nobody doing jail time).

A year after the Madoff congressional hearings, the person who had
tried unsuccessfully for a decade to get the SEC to act was on book
tour and was interviewed. He said he had changed his mind: rather than
Madoff working with criminal elements that controlled the SEC, he now
thought that Madoff may have defrauded some criminal elements ... and
turned himself in seeking gov. protection (but that still couldn't
account for why the SEC wasn't doing anything).

I've seen references that some people felt so badly about what happened
to Andersen in the wake of Enron ... that provisions in SOX were an
enormous gift to the audit industry ... significantly increasing the
audit requirements that businesses would have to pay for ... while
there was never any intention of enforcing any legal action. Besides
the GAO reports of fraudulent financial filings continuing and
increasing after SOX went into effect ... there is stuff like:
http://www.icij.org/project/luxembourg-leaks/big-4-audit-firms-play-big-role-offshore-murk

more trivia ... in the late 90s, I was asked to look at improving the
integrity of the documents supporting securitized mortgages ... as a
countermeasure to securitization being used to obfuscate fraud
... securitized mortgage (toxic CDO) posts
http://www.garlic.com/~lynn/submisc.html#toxic.cdo

I was also asked into NSCC (before its merger with DTC that formed
DTCC) to look at improving the integrity of exchange trading
transactions. I worked on it for a while and then got another call to
come in. They said the work was being suspended because a side-effect
of the integrity work would have greatly increased transparency and
visibility ... which is antithetical to wallstreet culture
... including something about possibly 30% of transactions needing to
be disavowed, also requiring plausible deniability. A decade later, in
the Madoff congressional hearings, the person who had tried
unsuccessfully for a decade to get the SEC to do something about
Madoff also raised the issue of transparency and visibility. DTCC ref
(also covering the controversy over naked short selling):
https://en.wikipedia.org/wiki/Depository_Trust_%26_Clearing_Corporation

(External):Re: IBM

cfmpublic@NS.SYMPATICO.CA (Clark Morris) writes:
Actually allowing any country to review code is to open an exposure.
On the other hand all users have at least some need to verify that
code is not exposing them. For those users with high security needs
and a large enough budget, having all software in house maybe using
open source software as a starting base can make sense. I believed
back in the 1970s and 80s that one of the best places to put a spy was
in the IBM software creation and distribution system. These comments
apply to all countries. It would be interesting to find out which
countries and entities are reviewing source code from the various
vendors. I believe that Snowden supporters are naive if they believe
that other major and not so major countries are not engaged in much
the same activities as those he accused the United States NSA and
other agencies of committing. If IBM is allowing the Chinese
government to review the code, I will guarantee that other governments
are also reviewing the code. In addition we know that at least some
ISV's have access to at least some of the code under non-disclosure
agreements. I leave to you who are citizens of various countries to
determine how concerned you should be.

A lot of this is a consequence of significant publicity in the past
couple of years about US gov. agencies putting backdoors in many
products from US companies. In many parts of the world, US companies
are now faced with proving that their products don't have backdoors.

There is the folklore from the early 80s about a certain gov. agency
asking IBM if it could guarantee that all the source IBM provided for
the POK favorite son operating system exactly corresponded to all the
code they were actually running. Supposedly a large taskforce spent a
significant amount of money investigating the issue and concluded that
it wasn't practical (almost impossible to identify exactly all the
corresponding source that went with all the running products that a
customer had installed).
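The verification problem can be made concrete with a toy provenance
check: a build-time manifest maps each module to digests of its source
and of the binary built from it, and verification re-hashes whatever is
actually installed. Everything here is illustrative (the module names
and byte strings are made up); the point of the folklore is precisely
that no such manifest existed for a customer's patched, locally
modified system, so a sketch like this only shows what the taskforce
would have needed.

```python
import hashlib

def digest(data: bytes) -> str:
    return hashlib.sha256(data).hexdigest()

# Hypothetical build manifest: module name -> (source digest, binary
# digest), recorded at build time.  Verification re-hashes what is
# actually installed; any module missing from the manifest, or whose
# digest mismatches, cannot be traced back to audited source.
def verify(manifest, installed):
    untraceable = []
    for module, binary in installed.items():
        entry = manifest.get(module)
        if entry is None or entry[1] != digest(binary):
            untraceable.append(module)
    return untraceable

manifest = {"IEFBR14": (digest(b"BR 14"), digest(b"\x07\xfe"))}
installed = {"IEFBR14": b"\x07\xfe", "LOCALFIX": b"\x00\x00"}
print(verify(manifest, installed))   # modules with no audited provenance
```

A locally applied fix or a PTF-modified member immediately shows up as
untraceable, which is the scale problem the taskforce ran into.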

It used to be that all source was available ... it was only in the 80s
that the OCO-wars started ... with IBM moving to no longer making
source available.

The 23Jun1969 unbundling announcement started charging for
(application) software, SE services, maintenance, etc ... however the
company made the case that operating system software should still be
free.

Then "Future System" was started in the early 70s as countermeasure to
clone controllers (totally different from 370, with tightly integrated
controllers having exceedingly complex protocol). Internal politics
started killing off 370 products. Then the lack of 370 products during
this period is credited with giving clone processor makers a market
foothold. I continued to work on 360 & 370 stuff during this period,
even periodically ridiculing the FS efforts (not exactly career
enhancing activity). some past posts
http://www.garlic.com/~lynn/submain.html#futuresys

Then when "Future System" imploded, there was a mad rush to get
products back into the 370 pipeline. This contributed to selecting
several of the things that I had been doing for release to customers.
Part of the stuff, dynamic adaptive resource management (dating
back to when I was an undergraduate in the 60s), was selected to be a
separate kernel component and the guinea pig for starting to charge
for operating system/kernel software (in large part because of the
rise of clone processors ... which was because of the lack of 370
products during the FS period) ... on the path to charging for all
software ... and then stopping making source code available. some past
posts
http://www.garlic.com/~lynn/subtopic.html#fairshare

The former president of AMEX then leaves IBM and becomes head of
another major private equity company ... which then does a private
equity LBO of the company that employs Snowden. Over the last decade
there has been an enormous uptick of outsourcing to for-profit
companies ... especially those under the thumb of private-equity
owners with enormous political clout. The majority of the intelligence
budget and over half the people are now with for-profit companies.
http://www.investingdaily.com/17693/spies-like-us/
Private contractors like Booz Allen now reportedly garner 70 percent of
the annual $80 billion intelligence budget and supply more than half of
the available manpower. They're not going away any time soon unless the
CIA and NSA want to start over and with some off-the-shelf laptops,
networked by the Geek Squad from Best Buy. Security clearances used to
be a government function too, but are now a profit center for various
private-equity subsidiaries.

... snip ...

comparison of private-equity LBOs to house flipping, except the loan
for the purchase goes on the victim company's books ... and stays with
it even after flipping. They can even sell a victim company for less
than they paid and still walk away with boatloads of money. Victim
companies are under intense pressure to make money to service the debt,
and they account for over half of corporate defaults (the companies
being paid for security clearances were found to be doing the
paperwork ... but not actually doing the security checking).
http://www.nytimes.com/2009/10/05/business/economy/05simmons.html?_r=0

There has been a lot written about the failure of gov. whistleblower
provisions ... rather than protecting whistleblowers, they set them up
for prosecution. The Success Of Failure scenario has long-time senior
people reporting problems to the responsible group in congress ... and
then being charged under the same statute that was used to charge
Snowden. The whistleblower provisions are for employees only (not for
the exploding number of contractors like Snowden) ... and they didn't
protect the employees either. Other trivia: in the wake of the Success
Of Failure scandal, congress put the agency on probation, not allowed
to manage its own projects (however, that may have just been a ploy to
further outsource to for-profit companies). some past posts
http://www.garlic.com/~lynn/submisc.html#whistleblower

More trivia: IC-ARDA (since renamed IARPA) released an unclassified
BAA around the turn of the century ... that basically said that none
of the tools they had did what was needed ... which turns out to
correspond to a lot of what was exposed in the later Success Of
Failure scandal.

loc1577-79:
The Iraqi army's cohesion in the face of a determined offensive by a
small force of irregulars can be measured in hours. When a few hundred
Islamic State of Iraq and Syria (ISIS) militiamen attacked Mosul, for
example, the 30,000-man Iraqi army garrison there fled, shedding their
uniforms and equipment as they ran.6

loc1693-95:
The ARVN that the United States created collapsed in 6 weeks, and the
Iraqi Army that the United States created collapsed in 6 hours because
they had neither a national sense of country nor a government--in
Saigon or Baghdad--that its soldiers believed was worth dying for.

loc2127-29:
After 13 years and a trillion dollars spent,22 security is so bad in
the capital city that the flag-furling ceremony for Operation ENDURING
FREEDOM had to be held in secret at an undisclosed location in Kabul
out of concern that the ceremony would be attacked by the Taliban.23

loc2319-21:
In 1993, McNamara addressed the Council on Foreign Relations in New
York. As Bruce Nussbaum notes, McNamara told the audience: he had made
a mistake. The protesters had been right all along. The war was
unwinnable from the start. The domino theory was ridiculous.
Nationalism had been confused with communism. There had never been a
serious threat to U.S. security.75

loc2372-75:
The most outspoken critics of America's military, like Lieutenant
General Herbert McMaster,85 retired Colonel Andrew Bacevich,86 and
former Marine Lieutenant Colonel Frank Hoffman, have criticized the
military establishment, or the officer corps, for not standing up to
civilian leaders, for being too willing to try to get the job done, or
for being, in Hoffman's harsh words, "yes men."

... snip ...

As an aside, McNamara was on LeMay's staff, planning and analyzing the
WW2 fire bombing of German and Japanese cities. In the 2003 documentary
The Fog of War, McNamara recalled the firebombing of Tokyo on March 9,
1945: In that single night, we burned to death a hundred thousand
Japanese civilians in Tokyo--men, women, and children. After the war,
General LeMay said to McNamara: If we'd lost the war we'd all have
been prosecuted as war criminals.

My son-in-law's 1st tour was Fallujah 2004-2005; his 2nd tour was
Baqubah 2007-2008, described as worse than Fallujah, in part because
the enemy made use of what they had learned in Fallujah (the
administration was claiming that things had gotten a lot better, so
Baqubah didn't get the coverage that Fallujah got)
http://www.amazon.com/Battle-Baqubah-Killing-Our-Way-ebook/dp/B007VBBS9I/

loc5243-54:
I was overwhelmed at the amount of destruction that surrounded me. The
sterile yard was about 150 meters wide by about 100 meters deep, and
it was packed full of destroyed vehicles (words can't describe what I
saw)

... and
I saw other Bradleys and M1 Abrams main battle tanks, the pride of the
1st Cavalry Division--vehicles that, if back at Fort Hood, would be
parked meticulously on line, tarps tied tight, gun barrels lined up,
track line spotless, not so much as a drop of oil on the white
cement. What I saw that day was row after row of mangled tan steel as
if in a junkyard that belonged to Satan himself.

... snip ...

Saddam learned from Iraq1 not to present easy targets for US air power
... just melt away. Then, from the law of unintended consequences: for
Iraq2, troops were told to bypass ammo dumps while looking for WMDs
... by the time they got around to going back, more than a million
metric tons had evaporated. They then started seeing large
artillery-shell IEDs, even taking out Abrams tanks
http://www.amazon.com/Fiasco-American-Military-Adventure-ebook/dp/B004IATD6U/

The Strategic Lessons Unlearned from Vietnam, Iraq, and Afghanistan:
Why the ANSF Will Not Hold, and the Implications for the U.S. Army in
Afghanistan
https://www.strategicstudiesinstitute.army.mil/pubs/display.cfm?pubID=1269

high level language idea

hancock4 writes:
I think better compilers didn't become widely available until the later 1960s.

I don't know the quality of IBM's first S/360 COBOL and FORTRAN
compilers. But I speculate it may have been poor since IBM was under
tremendous pressure to get that stuff out the door, and, early
machines tended to be small. At our 360-40 site, the free D COBOL was
poor; we used purchased F COBOL.

I don't know about Fortran, but in college they used the WATFIV
product, supposedly for better diagnostics. I don't know what Fortran
a researcher would use who had serious number crunching to do and
machine time was a consideration.

The person who did the 370/145 APL microcode assist at the Palo Alto
Science Center also did the internal FortranQ ... which was then
released as an enhancement, Fortran HX. The following has a section
that discusses some differences between FORT-H (extended) and FORT-H
(aka fortran q, enhanced) ... also example application counts for
Fort-G, Fort-H (extended), and Fort-H (enhanced)
http://www.cs.rice.edu/~keith/512/2011/Lectures/L02FortranH-1up.pdf

H&HX did optimization ... see reference comparing extended & enhanced
optimization differences ... that reference also has some discussion
about improvement in optimization becoming as good as human assembler
coding ...

Marine distress system

I was sitting at a coffee shop yesterday morning; the next table had
four men and one was explaining the marine distress system. A ship
signals distress and all ships in the world receive it ... and
receivers have a limited time interval to respond (cancel or
acknowledge) before their system retransmits the signal. He claimed
that a sinking ship's distress transmission near Australia a month ago
is still being retransmitted. He said that it is an ancient MS/DOS
based system and almost impossible to change. He said that some ships
turn off their receiver because there is also required paperwork that
has to be filled out for each signal that is received.
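The retransmit-until-acknowledged behavior described above can be
sketched in a few lines. This is a toy model of the anecdote, not the
real GMDSS protocol; the function names, interval count, and message
are all made up.

```python
# Toy model: a relay retransmits a distress signal each interval until
# some operator acknowledges or cancels it.
def relay(signal, ack_received, max_intervals=5):
    """Returns the transmission log for `signal`; `ack_received(i)` is a
    callback saying whether an acknowledgment arrived in interval i."""
    log = []
    for interval in range(max_intervals):
        if ack_received(interval):
            log.append(("acknowledged", interval))
            return log
        log.append(("retransmit", interval))   # no ack: send it again
    return log

# A receiver that never acknowledges keeps the signal circulating --
# the "still being retransmitted a month later" failure mode.
print(relay("MAYDAY", lambda i: False, max_intervals=3))
```

Turning the receiver off (to avoid the paperwork) is equivalent to the
callback never returning True, so the signal never dies.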

high level language idea

Peter Flass <peter_flass@yahoo.com> writes:
it was all open-source. No need to reverse-engineer anything. As late as
MVS/XA I had a VSAM open exit where the call location had to be gleaned by
looking at the fiche and disassembling the PL/S module, although it started
out as a straightforward source update in MVS/370.

I think JES2, or most of it, is still distributed as source, because so
many users had so many different mods. Our JES guy had tons and knew the
jes code inside and out.

however, the company managed to make the case that kernel (operating
system) software should still be free. Somewhat because clone
processor makers got a market foothold during the FS period (because
of the lack of 370 products) ... some past FS posts
http://www.garlic.com/~lynn/submain.html#futuresys

part of JES2 folklore is that they did all their source maintenance
using vm370/cms (originally cp67/cms) multi-level update infrastructure
... and then had to go thru conversion to the MVS-based system for
release. misc. past posts mentioning HASP, JES2, NJE, etc
http://www.garlic.com/~lynn/submain.html#hasp

high level language idea

hancock4 writes:
Keeping PTFs applied and up to date was a busy task for system
programmers at a large installation. They had to be reviewed to see
if the contents were relevant to the particular installation.
Sometimes they had to be loaded when the machine was idle and then
re-IPL'd. They had to be carefully tracked and loaded in the proper
order.

also regression tests ... PTFs could introduce failures &/or
incompatibilities ... like "fixes" for JCL that had the side-effect of
causing production jobs to stop running.

--
virtualization experience starting Jan1968, online at home since Mar1970

high level language idea

Anne & Lynn Wheeler <lynn@garlic.com> writes:
also regression tests ... PTFs could introduce failures &/or
incompatibilities ... like "fixes" for JCL that had the side-effect of
causing production jobs to stop running.

another side-effect ... over time, PTFs could significantly degrade the
performance of my system.

When I did sysgen, I tore apart the stage2 output of stage1 sysgen and
re-organized lots of the sequence to carefully place files and PDS
library members on disk to optimize arm seek motion (& PDS multi-track
search member lookup) ... which got nearly 3 times throughput
improvement on fortgclg student jobs.

PTFs would typically "replace" one or more PDS library members.
"Replace" is something of a misnomer ... it would insert the new member
at the end of the library and null out the member being replaced
... destroying the careful physical ordering on disk. After six months
of PTFs, my carefully generated system could lose half or more of the
performance optimization (enough PTFs might also use up the max space
that had been pre-allocated for the PDS library file).
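The decay mechanism can be modeled in a few lines: "replace" appends
the new copy at the end of the library and leaves a dead gap where the
old copy was, so a seek-ordered layout fragments as PTFs accumulate.
The member names and sizes below are made up; this is a sketch of the
behavior, not of real PDS directory format.

```python
# Toy model of PDS member "replace": the new copy is appended at the
# end of the library and the old extent becomes dead space (None),
# destroying any careful on-disk ordering.
class PDS:
    def __init__(self, members):               # ordered name -> size
        self.extents = [(n, s) for n, s in members.items()]

    def replace(self, name, size):
        # null out the old extent, append the new copy at the end
        self.extents = [(n if n != name else None, s)
                        for n, s in self.extents] + [(name, size)]

    def layout(self):                          # on-disk order of members
        return [n for n, _ in self.extents]

lib = PDS({"IEWL": 8, "IEFBR14": 1, "IEBGENER": 6})
lib.replace("IEFBR14", 1)                      # apply one PTF
print(lib.layout())   # ['IEWL', None, 'IEBGENER', 'IEFBR14']
```

After months of PTFs the hot members end up scattered at the tail of
the library with dead gaps in between, which is exactly what undid the
hand-ordered sysgen.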

old post with part of 60s SHARE presentation I made on system
optimization that I did as undergraduate. Most of it is about
significant rewrites of (virtual machine) cp67 system code to
cut simulation pathlengths ... benchmarking os/360 fortgclg
in virtual machine. Part of the post also discusses the
optimizations that I did for os/360.
http://www.garlic.com/~lynn/94.html#18 CP/67 & OS MFT14

It started with re-arranging cards in a card tray ... stage2 was over a
thousand cards ... mostly based on my understanding of os/360
internals. Then I got a load trace ... of PDS library member names
... and the card reorganizing got more sophisticated.

A decade later ... at SJR ... I did a modification of VM370 to collect
arm movement data ... both VM370 operations and the activity of virtual
machines. It was initially used in a cache simulator ... looking at
types of caching and caching strategies. One of the things it showed
was that the most efficient use of a fixed amount of electronic cache
was a global cache, rather than partitioning it into controller- or
device-level caches.

The work also found that emerging cache technology started to take care
of highest use common system data ... and physical location clustering
started to focus on collections of application data that tended to be
used together in bursty sequences ... as in daily, weekly, monthly
intervals. Also highlighted differentiating between high re-use data
patterns and purely sequential use data patterns where caching provided
little benefit (and caching sequential use data could degrade throughput
by replacing repeated high-use data).
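The global-versus-partitioned finding is easy to reproduce with a toy
LRU simulation: the same number of cache slots is run once as a single
global pool and once split into fixed per-device partitions, against a
made-up skewed trace (one hot device, one nearly idle). The trace and
sizes are illustrative assumptions, not the original study's data.

```python
from collections import OrderedDict

class LRU:
    """Minimal LRU cache that just counts hits."""
    def __init__(self, slots):
        self.slots, self.data, self.hits = slots, OrderedDict(), 0
    def access(self, key):
        if key in self.data:
            self.data.move_to_end(key)
            self.hits += 1
        else:
            if len(self.data) >= self.slots:
                self.data.popitem(last=False)   # evict least recently used
            self.data[key] = True

# Skewed trace: device 0 cycles over a 6-block working set, device 1
# sees a few one-off blocks.
trace = [(0, i % 6) for i in range(600)] + [(1, i) for i in range(8)]

global_cache = LRU(8)                           # one pool of 8 slots
per_device = {0: LRU(4), 1: LRU(4)}             # same 8 slots, split

for dev, block in trace:
    global_cache.access((dev, block))
    per_device[dev].access(block)

print(global_cache.hits, per_device[0].hits + per_device[1].hits)
```

The global pool absorbs the hot device's whole working set, while the
static partition is too small for it (and the idle device's slots sit
wasted), which is the effect the arm-trace cache simulations showed.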

There was work on doing efficient real-time reduction of the data ...
so it might be used by systems for allocation strategies as well as
supporting dynamic moving data for better throughput.

About the same time there was a different internally developed program,
MDREORG, that relied on standard VM370 PER collected information. It
was used by (internal) installations moving from 3330&3350 to 3380
configurations. It would define load balancing across 3380 drives and
ordering within 3380 drives to optimize throughput. One of the things
it showed: if you completely filled a smaller number of 3380 drives
with the data from a 3350 drive configuration, there would be worse
performance. The throughput break-even was filling a 3380 drive 80%
full of data from the 3350 configuration (which was still cheaper than
the 3350 configuration). For better performance, a 3380 needed to be
less than 80% full of data from the 3350 configuration (leaving the
rest of the space empty or using it for very infrequently used data).
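A back-of-envelope model shows how a break-even fill factor like 80%
can fall out of arm arithmetic. The capacities below are approximate,
and the ~1.6x per-arm service-rate advantage assumed for the 3380 is an
illustrative assumption chosen to make the arithmetic visible, not a
measured number from MDREORG.

```python
# Load per disk arm grows with the MB placed under that arm; an arm's
# service rate is roughly fixed.  Break-even fill for a 3380 is where
# its arm carries the same load, relative to its service rate, as a
# completely full 3350 arm.
MB_3350, MB_3380 = 317, 630   # usable MB per actuator, roughly
ARM_SPEEDUP = 1.6             # assumed 3380 per-arm service advantage

breakeven_fill = (MB_3350 * ARM_SPEEDUP) / MB_3380
print(f"break-even 3380 fill: {breakeven_fill:.1%}")   # roughly 80%
```

Fill a 3380 past that point and each arm carries more active data than
a 3350 arm did, so fewer-but-fuller drives lose throughput even though
they win on cost.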

Semi-related ... in the late 70s & early 80s, I was making comments
that the relative system throughput of disks was significantly
declining ... by the early 80s, the relative system throughput of
disks was an order of magnitude less than in the mid/late 60s (when I
was doing the careful os/360 sysgens) ... aka processors got 40-50
times faster, disks got 3-5 times faster.

Relational Databases Lack Relationships

Eric <eric@deptj.eu> writes:
Aside from needing to find out what on earth they mean by
"semi-structured" and "ad-hoc, exceptional relationships", has anyone
ever heard, from any other source, that codifying paper forms and tabular
structures is what relational databases were designed to do?

I got roped into doing part of the implementation for the original
sql/relational ... Codd's office was on the floor above mine.

They were somewhat competing with IMS for efficient financial
transactions (an early adopter), and table optimization allowed a
single record to be fetched using the account number index, containing
all the fields for performing financial transactions.

The IMS group still criticized the implementation as requiring twice
the disk space (for the table index) as IMS with its direct record
pointers ... and 4-5 times the disk I/Os (doing the index lookup). The
counter was that IMS required significant manual maintenance to go
through and update all the exposed record pointers anytime there was
even a trivial, minor change in layout. To some extent the trade-offs
inverted in the 80s when the cost of disk space dropped enormously and
the size of memories significantly increased ... allowing a
significant amount of index caching (reducing physical disk
I/Os). One of the original customer joint studies/pilots was with a
large financial institution.
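The I/O-count argument can be sketched with simple B-tree arithmetic.
The record count and index fanout below are made-up illustrative
numbers, not figures from the original IMS comparison; with them, the
index path costs about 5 I/Os against 1 for a direct pointer, which is
the flavor of the "4-5 times" criticism.

```python
import math

# Toy comparison: IMS-style direct record pointer vs relational access
# through a B-tree index on the account number.
records = 10_000_000
fanout = 200                  # assumed index entries per index page

index_levels = math.ceil(math.log(records, fanout))    # B-tree height
print("direct pointer:", 1, "disk I/O")                # pointer -> record
print("index lookup:", index_levels + 1, "disk I/Os")  # walk index + record

# Larger memories invert the trade-off: with all non-leaf index levels
# cached, an index lookup costs about 2 physical I/Os (leaf + record),
# while still avoiding manual maintenance of exposed record pointers.
print("with non-leaf levels cached:", 2, "disk I/Os")
```

The direct pointer never gets cheaper than 1 I/O, but it is the pointer
maintenance on every layout change, not the I/O count, that the caching
era turned into the deciding cost.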

The next generation "official" DBMS was called EAGLE. With the
corporation preoccupied with EAGLE, we were able to do technology
transfer to Endicott and get it released as SQL/DS. Later, when EAGLE
imploded, they asked how long it would take to make it available
... which was eventually released as DB2.

At the same time, I got sucked into doing another implementation that
directly instantiated all relationships (done for an internal chip
design tools group) ... every field effectively became a separate
indexed record and every relationship became a separate indexed
record. It could require 7-10 times the disk space of the original
sql/relational implementation and significantly more disk I/Os (access
the index for a field, then read the field, then access the indexes
for all its possible relationships, read the relationships, and then
access all the indexes for each field of each relationship and then
read all those fields).

For really large amounts of complex data ... there was a cross-over
where table joins became more expensive (more overhead) than directly
instantiating all fields and relationships (with a forest of indexes).

Are we just running in place?

Quadibloc <jsavard@ecn.ab.ca> writes:
Well, heck. It was written by IBM. They had the source and the internal
documentation for their System/360 operating systems at their elbows. They knew
how this stuff was done, and they tripped over the problems before Bill Gates
or Richard Stallman, for that matter, were out of knee pants.

So it would be highly surprising were anything else the case.

There was the joke that some of the os/360 Kingston MFT people went to
Boca and attempted to re-implement MFT for the Series/1 ... released
as RPS ... big, kludgy, overweight. Folklore is that EDX was originally
done by some physics summer interns at IBM San Jose Research.

OS/2 was lighter weight ... but by that time the machines were larger
and more powerful than peachtree (used in series/1).

In the 60s as an undergraduate, I had done dynamic adaptive resource
management for (virtual machine) cp67 ... which was picked up and
shipped in the standard product. In the (simplification) morph from
cp67 to vm370 ... almost all of my changes were dropped (as well as
other stuff).

Charlie had invented the compare&swap instruction (name chosen because
"CAS" are his initials) when he was doing fine-grain multiprocessing
cp67 locking. The attempt to include the compare&swap instruction in
370 was initially rebuffed (the POK favorite son operating system
people said that the test&set instruction was more than satisfactory).
The 370 architecture owners said that to justify compare&swap for 370,
we would have to come up with uses other than kernel multiprocessor
support. Thus were born the high-performance application multithreaded
(multiprogramming) uses (whether running on a multiprocessor or not);
examples continue to appear in the appendix of the principles of
operation (so useful that compare&swap or similar semantics started to
appear on other platforms for high-performance DBMS multithreading).
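The multithreaded-application use is the classic retry loop: reload the
value, compute the update, and apply it only if nothing changed in
between. A minimal sketch, with the atomic CAS primitive simulated by a
lock (on real hardware the instruction itself supplies the atomicity):

```python
import threading

_cas_lock = threading.Lock()

def compare_and_swap(cell, expected, new):
    """Simulated CAS: atomically set cell[0] to `new` only if it still
    holds `expected`; returns whether the swap happened."""
    with _cas_lock:               # stands in for hardware atomicity
        if cell[0] == expected:
            cell[0] = new
            return True
        return False

counter = [0]

def add_one():
    for _ in range(10_000):
        while True:               # retry loop: reload, recompute, CAS
            old = counter[0]
            if compare_and_swap(counter, old, old + 1):
                break             # our update won; otherwise try again

threads = [threading.Thread(target=add_one) for _ in range(4)]
for t in threads: t.start()
for t in threads: t.join()
print(counter[0])                 # no updates lost
```

The point of the idiom (and of the principles-of-operation examples) is
that the updater never holds a lock across the whole read-modify-write;
a losing thread simply retries with the fresher value.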

With the failure of FS, there was a mad rush to get stuff back into the
370 product pipeline ... which contributed to including a lot of stuff
I had been working on in customer releases. Some of it went into the
next "free" vm370 release ("3"). A bunch of stuff was decided to be the
guinea pig for starting to charge for kernel software ... a lot of
resource management
http://www.garlic.com/~lynn/subtopic.html#fairshare
and a bunch of paging infrastructure
http://www.garlic.com/~lynn/subtopic.html#clock

as the "Resource Manager". However, it included a bunch of stuff done
for structuring kernel for multiprocessor support (w/o actual turning on
multiprocessor support).

Then for vm370 release 4, it was decided to ship multiprocessor
support. At the time, the guidelines for kernel software charging were
that "old" software was still free, and software directly involved in
hardware support had to be free (charged-for kernel software was
"add-on"). The problem was that nearly 90% of the lines of code in the
charged-for resource manager were required for multiprocessor support.
For vm370 release 4 multiprocessor support, a little sleight of hand
magically moved nearly 90% of the resource manager into the "free"
software base ... while not changing the price of the charged-for
resource manager.

"Osmium" <r124c4u102@comcast.net> writes:
I don't have the slightest reason to believe the problem has been
*fixed*. There is one less name on Wall Street, the one allowed to
fail. Other than that, the same cast of characters who caused the
recent problems, instead of being in prison, are left free to organize
new, and even more complex, methods that will likely result in a
second failure. Too big to fail is still true. Of course its only
been, what, seven years, since the failure?

A big component of the '29 failure was banks investing loose money,
that wasn't theirs, in the stock market. Fortunately, those banks
failed and disappeared. In the recent fiasco, they [1] are still
there and dispensing huge bonuses to undeserving people. The
undeserving people are those who can make the right guesses about the
future a few times in a row and have the right contacts.

from above:
Against this backdrop, Lehman turned to Repo 105 transactions to
temporarily remove $50 billion of assets from its balance sheet at
first and second quarter ends in 2008 so that it could report
significantly lower net leverage numbers than reality.

... snip ...

The four largest TBTF "banks" were still carrying $5.2T "off-book" at
the end of 2008 ... over 100 times that of Lehman ... allowed to play
by different rules:
Bank's Hidden Junk Menaces $1 Trillion Purge
http://www.bloomberg.com/apps/news?pid=newsarchive&sid=akv_p6LBNIdw&refer=home

Later, a son presides over the economic mess, 70 times larger than the
S&L crisis. In the S&L crisis there were 30,000 criminal referrals and
1,000 criminal convictions ... there have been *ZERO* criminal
referrals/convictions for the economic mess (proportionally, there
should have been 70,000 criminal convictions). If the economic mess
had been limited to 1/100th the size, it would still have been
slightly over half the problem of the S&L crisis.

Securitized mortgages had been used during the S&L crisis to obfuscate
fraudulent mortgages (the poster child was office buildings in the
Dallas/Ft. Worth area that turned out to be empty lots). In the late
90s, I was asked to look at improving the integrity of supporting
documents as a countermeasure. Then loan originators were securitizing
loans&mortgages and paying for triple-A ratings (when both the sellers
and the rating agencies knew they weren't worth triple-A, from Oct2008
congressional testimony). A triple-A rating trumps supporting
documentation, so they could start doing no-documentation liar
loans. Being able to pay for triple-A eliminated any reason for loan
originators to care about borrowers' qualifications or loan quality;
they could sell off all loans (as fast as they could be made) to
customers restricted to dealing in "safe" investments (like large
pension funds; the claim is this accounts for a 30% loss in funds and
a trillions shortfall for pensions), largely enabling over $27T during
2001-2008
Evil Wall Street Exports Boomed With 'Fools' Born to Buy Debt
http://www.bloomberg.com/apps/news?pid=newsarchive&refer=home&sid=a0jln3.CSS6c

If that wasn't enough, paying for triple-A ratings enabled them to
create securitized mortgages designed to fail, pay for the triple-A
rating, sell them off to their customer/victims, and then take out CDS
gambling bets that they would fail (creating enormous demand for dodgy
mortgages). Later, the largest holder of the CDS gambling bets was AIG
... who was negotiating to pay off at 50-60 cents on the dollar when
the sec. of treasury steps in, forces them to sign a document that
they can't sue those making the CDS gambling bets, and to take TARP
funds to pay off at face value (AIG is the largest recipient of TARP
funds, and the firm formerly headed by the sec. of treasury is the
largest recipient of face-value payoffs).

Disclaimer: Jan2009 I was asked to HTML'ize the Pecora hearings (30s
senate hearings into the crash of '29) with lots of internal hrefs and
URLs between what happened then and what happened this time (comments
that the new congress might have an appetite to do something). I work
on it for a while and then get a call that it won't be needed after
all (reference to enormous piles of wallstreet money totally burying
DC). Loans and mortgages were being laundered through wallstreet as
triple-A rated, securitized instruments, largely using the real estate
market for the speculation bubble scam. The crash of '29 used the
stock market for the speculation bubble scam (from Pecora testimony):
BROKERS' LOANS AND INDUSTRIAL DEPRESSION

For the purpose of making it perfectly clear that the present
industrial depression was due to the inflation of credit on brokers'
loans, as obtained from the Bureau of Research of the Federal Reserve
Board, the figures show that the inflation of credit for speculative
purposes on stock exchanges were responsible directly for a rise in
the average of quotations of the stocks from sixty in 1922 to 225 in
1929 to 35 in 1932 and that the change in the value of such Stocks
listed on the New York Stock Exchange went through the same identical
changes in almost identical percentages.

While the TARP funds were a facade, supposedly to be used to buy
off-book toxic assets ... with only $700B, they could have barely made
a dent in the problem.

The FEDs fought a long, hard legal battle to prevent public disclosure
of what they were doing behind the scenes ... tens of trillions in
ZIRP funds (claims that the TBTF have been making $300B/annum off ZIRP
funds) and buying trillions in offbook toxic assets at 98 cents on the
dollar. After losing the legal battle, Bernanke held a press
conference where he said that he had expected the TBTF to use the ZIRP
funds to aid mainstreet, but when they didn't, he had no way to force
them (that didn't stop the ZIRP funds, though). Note the Fed chairman
was supposedly selected in part because he was a depression scholar.
However, the Fed had tried something similar after the crash of '29
and wall street had behaved the same way ... so there should have been
*NO* expectation that they would behave differently this time.

By comparison, the TBTF have been fined a total of $300B in "deferred
prosecution" since the economic mess (not limited to economic-mess
illegal activity, but also manipulating LIBOR, Forex, and other
commodities, money laundering for drug cartels and terrorists,
ROBO-signing mills fabricating documents for illegal foreclosures,
etc). The joke is that the $300B in fines is so small compared to the
illegal profits that it is just viewed as a cost of TBTF doing illegal
business (especially with no threat of jail time).

rationality

Morten Reistad <first@last.name.invalid> writes:
It was the Glass-Steagall act (really just 4 provisions in the
Banking act of 1933) (GS1933) that at least plugged the multiplier
of the crash that had made the 1929 crash so bad. This worked
until the Gramm-Leach-Bliley of 1999 (GLB1999) removed the two crucial
provisions of the GS1933. Signed by William Jefferson Clinton.

Rhetoric on the floor of congress (initially) was that the purpose of
GLBA was to prevent new banking competition (especially competitors with
new technology that would make banking significantly more efficient and
competitive) ... if you already have a banking charter, you get to keep
it; if you don't already have one, you can't get one. They then start
adding other favors ... repeal of Glass-Steagall and "opt-out" personal
information sharing. Folklore is that the president initially was going
to veto GLBA ... which passes pretty much along party lines (54-44);
then they go back and bring the other party on board, it passes with a
veto-proof 90-8, and the president signs it. It was supposedly explained
to him that wallstreet had lobbied the majority party in congress to the
tune of $125M, but had also lobbied his party in congress to the tune of
$115M (and congress "owed" it to wallstreet).
https://en.wikipedia.org/wiki/Gramm%E2%80%93Leach%E2%80%93Bliley_Act

The president of AMEX is in competition to be the next CEO and wins. The
loser takes his protege and leaves, going to Baltimore and taking over
what is described as a loan sharking business. They then make several
other acquisitions, eventually acquiring CITIBANK in violation of
Glass-Steagall; Greenspan gives them an exemption while they lobby
congress for repeal of Glass-Steagall (enabling too big to fail). They
enlist several others ... including Gramm, #2 on Time's list of those
responsible for the economic mess
http://content.time.com/time/specials/packages/article/0,28804,1877351_1877350_1877330,00.html

and secretary of treasury (and former chairman of Goldman Sachs). Once
GLBA is on its way to final vote, the secretary of treasury resigns and
joins Citigroup as what was described at the time as co-CEO. The
protege leaves and becomes chairman of JPMorgan/Chase

Gramm and his wife were also responsible for the Commodity Futures
Modernization Act, preventing regulation of over-the-counter derivatives
... originally described as a gift to ENRON

The head of the CFTC proposes regulating derivatives and is quickly
replaced with Gramm's wife, while he gets a provision added that
prevents the regulation. She then resigns and joins the ENRON board,
serving on the financial audit committee. ... some past posts
http://www.garlic.com/~lynn/submisc.html#enron

Securitized mortgages had been used during the S&L crisis to obfuscate
fraudulent mortgages (the poster child was office bldgs in the
Dallas/Ft.Worth area that turned out to be empty lots). In the late 90s,
I was asked to look at improving the integrity of supporting documents
as a countermeasure. Then loan originators were securitizing loans &
mortgages and paying for triple-A ratings (when both the sellers and the
rating agencies knew they weren't worth triple-A, from Oct2008
congressional testimony). A triple-A rating trumps supporting
documentation, so they could start doing no-documentation liar loans.
Being able to pay for triple-A eliminated any reason for loan
originators to care about borrowers' qualifications or loan quality;
they could sell off all loans, as fast as they could be made, to
customers restricted to dealing in "safe" investments (like large
pension funds; the claim is it accounts for a 30% loss in funds and a
trillions shortfall for pensions), largely enabling over $27T being done
2001-2008:
Evil Wall Street Exports Boomed With 'Fools' Born to Buy Debt
http://www.bloomberg.com/apps/news?pid=newsarchive&refer=home&sid=a0jln3.CSS6c

If that wasn't enough, they also start packaging securitized
loans/mortgages designed to fail, paying for triple-A, selling to their
victim customers, and then taking out (over-the-counter derivative) CDS
gambling bets that they would fail (creating enormous demand for dodgy
loans & mortgages). The largest holder of CDS gambling bets was AIG,
which was negotiating to pay off at 50-60 cents on the dollar when the
secretary of treasury (also a former chairman of Goldman Sachs) steps in
and forces them to sign a document that they can't sue those making the
gambling bets and to take TARP funds to pay off at 100 cents on the
dollar. The largest recipient of TARP funds was AIG, and the largest
recipient of face-value payoffs was the TBTF formerly headed by the
sec-of-treasury.

In the early 90s (about the time IBM was going into the red), AMEX did a
spin-off of much of its dataprocessing (mostly mainframes), as First
Data, in what was the largest IPO up until that time. 15yrs later, KKR
did a private-equity take-over of First Data in what was described as
the largest LBO up until that time (or 2nd largest, depending on how you
date the take-over). Recently KKR did a (re-)IPO of First Data.

Private-equity take-overs have been described as similar to house
flipping, except the loan to buy the company goes on the company's books
and goes with the company when it is sold (it is even possible to sell a
company for less than was paid and still walk away with boatloads of
money). Also, the companies are under intense pressure to service the
debt load, and over half of "corporate defaults have been companies
either owned at one time or still owned by private equity firms"
http://www.nytimes.com/2009/10/05/business/economy/05simmons.html?_r=0

The company has slowly expanding revenues, and steep debt. At current
tip, the company has an unadjusted total load of $21.16 billion. First
Data will be using the $2.56 billion proceeds from the initial public
offering to pay down some of its long-term debt.

... snip ...

--
virtualization experience starting Jan1968, online at home since Mar1970

rationality

jmfbahciv <See.above@aol.com> writes:
I think the Fed has stopped printing money so the reaction to the
problem they were trying to "stop" will happen in the next few years.

The FED is still providing tens of trillions in ZIRP funds. There are
lots of references that the periodic comments by the FED chairman that
they will stop ZIRP funds are just HYPE ... the claim is that the FED
has backed itself into a corner with nearly a decade of ZIRP funds and
doesn't know what to do now.

--
virtualization experience starting Jan1968, online at home since Mar1970

rationality

Andrew Swallow <am.swallow@btinternet.com> writes:
Who gets to pay zero rate interest?
Not me. Credit cards still charge interest. Mortgages still charge
interest. Business loans still charge interest (if the company can get
one). Government bonds pay you a tiny amount of interest (near
zero). So I suspect that only the banks get the zero interest rate.

Bernanke fought a hard battle in the courts to prevent disclosure of
what he was doing behind the scenes with the tens of trillions in ZIRP
funds. After the FED was finally forced to make public what it was
doing, Bernanke told the press that he had thought the TBTF would use
the ZIRP funds to lend to mainstreet, but when they didn't, he had no
way to force them (but he also didn't stop the ZIRP funds).

Note supposedly one of the reasons Bernanke was chosen as new chair of
the FED was that he was a depression-era scholar. However, the FED had
tried something similar in the 30s and wallstreet had reacted similarly
... so there was no reason Bernanke should have expected them to do
anything different this time.

The four largest TBTF "banks" were still carrying $5.2T "off-book" at
the end of 2008 ... over 100 times that of Lehman ... allowed to play by
different rules:
Bank's Hidden Junk Menaces $1 Trillion Purge
http://www.bloomberg.com/apps/news?pid=newsarchive&sid=akv_p6LBNIdw&refer=home

Repeal of Glass-Steagall allowed regulated depository institutions to
combine with other institutions whose extremely risky & questionable
behavior had been winked at. That significantly increased the size of
institutions, as well as putting the depository institutions at risk
from the behavior of these other operations.

The result was not only the rise of too big to fail ... but it
eventually expanded into too big to prosecute and too big to jail. They
effectively felt they could engage in all sorts of illegal activity
with impunity ... with the only penalty relatively small fines and
"deferred prosecution" ... so far a total of $300B in fines, which is
relatively minor compared to the size of their illegal profits.

Trivia: The original purpose of GLBA (now better known for the repeal
of Glass-Steagall) was to prevent other institutions from getting
banking charters. As part of the FED ZIRP fund bailout, they were
giving banking charters to some of the large, risky investment
institutions that didn't already have one (which should have been a
violation of GLBA) ... making them eligible for ZIRP funds.

Then tax revenue was cut by $6T and spending increased by $6T, for a
$12T budget gap (compared to the PAYGO fiscally responsible baseline
budget). By the middle of last decade, the comptroller general was
including in speeches that nobody in congress was capable of middle
school arithmetic (for how badly they were savaging the budget).

Joke is that congress is the most corrupt institution on earth, selling
trillions in tax loopholes for billions. Debt has now exploded to over
$18T ... with nearly half a trillion in annual debt payments ... most of
that going to TBTF using ZIRP funds (from the federal reserve), getting
around $300B/annum; $2.8T is owed to the SS Trust Fund, over a trillion
each to China and Japan ... and a few others. Eliminating many of the
tax loopholes from after PAYGO expired gets a trillion/annum to balance
the budget (but then congress is out the billions in graft and
corruption). However, they need an additional trillion/year in tax
revenue: half a trillion for the debt payments and half a
trillion/annum paying off the debt (which would take 36 years).

Baby boomer (bubble) generation is four times as large as the previous
generation and twice as large as the following generation. As long as
baby boomers are working, more money goes into the SS Trust Fund than is
paid out in benefits (building a reserve for baby boomers' retirement).
However, congress has been "borrowing" the money in the SS Trust Fund to
pay for other things. As baby boomers retire ... the amount paid into
the SS Trust Fund will significantly drop and benefits will
significantly increase. This will require that congress raise taxes on
the following generation to pay back the money that has been looted
from the trust fund (currently $2.8T missing). The betting is that
congress will radically reduce or eliminate SS benefits instead ... so
that the missing money won't have to be replaced via increased taxes on
the following generation (increasingly being spun as old people taking
from the young ... when it is actually the young covering past
congressional financial manipulation).

Congress has also been looting infrastructure maintenance & upkeep to
pay for other things .... claims another $2T in deferred
infrastructure spending ... which will also require increasing taxes
on the following generation to cover that deficit.

There is a $1T/annum shortfall via tax loopholes after PAYGO expired.
There is also nearly $1T/annum to clear the deficit mostly created
after PAYGO expired. There is nearly another $1T/annum to cover things
like the SS Trust Fund, deferred infrastructure spending, and other
things. That is nearly $3T/annum in taxes just to fix the congressional
financial manipulation that has been going on. The 2014 federal budget
was $3T, so that requires $6T in taxes: half to cover balanced federal
spending and half to cover past congressional financial manipulation.

Trivia: The congressional party in power during the 90s played a major
role in PAYGO and the balanced budget. Last decade it was a completely
different party, albeit with the same name, that allowed PAYGO to
expire ... responsible for the enormous graft & corruption in tax
loopholes and the huge deficit.
https://en.wikipedia.org/wiki/PAYGO

The party with that name was also responsible for Medicare Part-D
(billed as an enormous gift to the drug industry) ... the first major
legislation after PAYGO was allowed to expire. CBS 60mins did an expose
of the 18 members of the party (staffers and members of congress)
responsible for getting it passed. After it passed, all 18 had resigned
and were on the drug industry payroll.

Middle of last decade, the comptroller general was including in
speeches that nobody in congress was capable of middle school
arithmetic (for how badly they were savaging the budget). He was also
claiming that Part-D becomes a long-term $40T item, totally swamping
all other budget items.

Since the turn of the century, local Washington DC news will
periodically refer to congress as Kabuki Theater (the term will
sometimes even leak into national news) ... what is seen publicly has
little to do with what is really going on (the apparent conflict
between the parties is a distraction for the public) ... speculation is
whether there are actually more than one or two honest members in
congress.

rationality

"Osmium" <r124c4u102@comcast.net> writes:
I would expect to hear about little old ladies, widows, who expected
to live on interest for the rest of their lives but can't. I don't see
those stories. Of course they never could, even in the past, if you
factor in inflation and taxes. But they *thought* they could. Which
was, kind of, the point, or at least *a* point

In 2016, social security benefits may go up 1% or less, because there is
"no inflation". However, some retirees are supposedly seeing their
monthly medicare premium (automatically deducted from social security)
go up by as much as three times. Rents in various parts of the country
have been increasing 10-15%. There are various claims that the gov. is
cooking the inflation books

... and ...
There is so much wrong with the BLS data, I don't know where to
start. The rental market has been on fire since 2012. Builders are
erecting apartments at a breakneck pace. Independent, non-captured,
neutral real estate organizations show rents surging to all time highs,
growing by 5.1% on an annual basis. Real rents in the real world have
grown by 14% since 2012. The BLS says they've grown by 9%. Who do you
believe?

... snip ...

--
virtualization experience starting Jan1968, online at home since Mar1970

rationality

Bloomberg just broadcast an interview with (nobel winner) Stiglitz
saying ZIRP funds haven't helped the economy ... that the real problem
is that workers' wages/income aren't going up ... so people don't have
the money to spend ... and without the public spending money, the
economy is where it is

had been a protege of the person competing to be next CEO of amex ...
when that person loses to the president of amex, they go to baltimore,
taking over what is described as a loan sharking business. they then
make some number of other acquisitions, eventually acquiring citibank
in violation of glass-steagall. Greenspan gives them an exemption while
they lobby congress for repeal.


rationality

"Osmium" <r124c4u102@comcast.net> writes:
So we are seven years into this experiment to see if zero interest
rates can get us out of a recession. So we are in about 1936 in
comparison to the depression of 1929. That one eased up in 1940 or so
when Hitler started mixing it up in Europe.

Google says credit card rates are almost 18% which seems unnecessarily
high to me. The Federal Reserve Board is probably using the only
tools they have to inject money into the economy. But I wonder if this
is the best way. As I understand it the FRB it is a little
(subjectively) government in and of itself and *they* have decided
there is a problem, what the problem is, and how to fix it.

I wonder what plan B is?

As an aside ... besides the other TBTF illegal activities (fabricating
CDOs with triple-A ratings, CDOs with triple-A ratings designed to fail
for fixed CDS gambling bets, robo-signing mills for illegal
foreclosures, manipulating LIBOR, Forex, and commodity markets, money
laundering for drug cartels and terrorists, aiding the 1% with
offshoring and tax evasion, etc) ... they have also been found using
some of their ZIRP funds backing payday-lender fronts ... getting the
equivalent of hundreds of percent interest rates.

rationality

Peter Flass <peter_flass@yahoo.com> writes:
I would agree, but most people who know say that the current system is a
delicate balance between what people pay in and what they get out. If you
raise the income limit and don't raise the top benefit people will see the
system less as an insurance system (which is the way it was sold
originally) and more of a welfare system. This would decrease the support
for SS and lead to other programs down the road.

At least raising the retirement age has some logic that could be used to
sell it, since fewer people are dying in their 60s and more in their 80s
these days.

Baby boomer (bubble) generation is four times as large as the previous
generation and twice as large as the following generation. As long as
baby boomers are working, more money goes into the SS Trust Fund than
is paid out in benefits (building a reserve for baby boomers'
retirement). However, congress has been "borrowing" the money in the SS
Trust Fund to pay for other things. As baby boomers retire ... the
amount paid into the SS Trust Fund will significantly drop and benefits
will significantly increase (more going out than coming in). This will
require that congress raise taxes on the following generation to pay
back the money (currently $2.8T) that has been looted from the trust
fund.

Some in congress will propose all sorts of alternatives ... so that the
missing money won't have to be replaced via increased taxes on the
following generation (increasingly being spun as old people taking from
the young ... when it is actually the young covering past congressional
financial manipulation). There are actually two parts to the increased
new taxes: 1) replace the funds looted from the Trust Fund to cover the
baby boomer benefits, 2) pay for spending that had previously been
covered by looting the Trust Fund.

Looking at it from a different point of view, throwing PAYGO under the
bus in 2002 allowed a $6T reduction in tax revenue and a $6T increase in
spending (CBO report 2003-2009; total Federal debt has now exploded to
$18T, with interest pushing half a trillion) ... and the economic mess
with over $27T in securitized loans (because of lack of regulation) can
be viewed as a somewhat independent activity (happening concurrently)
https://en.wikipedia.org/wiki/PAYGO

Need to cut spending by around $1T/annum and eliminate the tax loopholes
sold to the wealthy and corporations, increasing taxes by $1T/annum
... returning to the PAYGO fiscally responsible balanced budget (before
congress dropped PAYGO and went crazy selling tax loopholes and
increasing spending in 2002). However, it also needs increasing taxes by
approx. another $.5T/annum to cover the debt interest payments and by
approx. $.5T/annum for paying off the exploded debt (at $.5T/annum,
taking 36yrs to clear). The total increase in taxes is then $2T/year (a
combination of eliminating the tax loopholes sold to the wealthy and
corporations, paying off the debt created after dropping PAYGO, and the
interest payments on that debt).

--
virtualization experience starting Jan1968, online at home since Mar1970


rationality

Greymaus <mausg@mail.com> writes:
Back in the '70s, I think, there was a rule that people dealing with the
Stock Market were limited to something like 1% actual cash, until Friday
night when everything had to be paid for. Dim memory of what happened to a
person I knew.

Griftopia has a chapter on the CFTC rule that people playing in
commodities had to have a significant position, because speculators
created wild irrational price swings. Then there were 19 secret letters
that went to specific speculators, which resulted in wild irrational
price swings (they would bet on things going up and then push them up
... and then bet on things going down and then push them down) ...
including the huge spike in oil (& gasoline) in the summer of 2008.
Later a member of congress published the summer 2008 transactions
showing those responsible ... somehow they then got the press to
criticize him for violating corporate privacy (misdirection away from
the speculators). some past posts
http://www.garlic.com/~lynn/submisc.html#griftopia

Securitized mortgages (CDOs) had been used during the S&L crisis to
obfuscate fraudulent mortgages (the poster child was office bldgs in
the Dallas/Ft.Worth area that turned out to be empty lots). In the late
90s, I was asked to look at improving the integrity of supporting
documents as a countermeasure. Then loan originators were securitizing
loans & mortgages (CDOs) and paying for triple-A ratings (when both the
sellers and the rating agencies knew they weren't worth triple-A, from
Oct2008 congressional testimony). A triple-A rating trumps supporting
documentation, so they could start doing no-documentation liar loans.
Being able to pay for triple-A eliminated any reason for loan
originators to care about borrowers' qualifications or loan quality;
they could sell off all loans, as fast as they could be made, to
customers restricted to dealing in "safe" investments (like large
pension funds; the claim is it accounts for a 30% loss in funds and a
trillions shortfall for pensions), largely enabling over $27T being
done 2001-2008:
Evil Wall Street Exports Boomed With 'Fools' Born to Buy Debt
http://www.bloomberg.com/apps/news?pid=newsarchive&refer=home&sid=a0jln3.CSS6c

If triple-A rated toxic CDOs weren't bad enough, they then start
creating securitized mortgages designed to fail, paying for the
triple-A rating, selling them off to their customer/victims, and then
taking out CDS gambling bets that they would fail (creating enormous
demand for dodgy mortgages). Later, the largest holder of the CDS
gambling bets was AIG ... which was negotiating to pay off at 50-60
cents on the dollar when the sec. of treasury steps in, forces them to
sign a document that they can't sue those making the CDS gambling bets,
and to take TARP funds to pay off at face value (AIG is the largest
recipient of TARP funds, and the firm formerly headed by the sec. of
treasury is the largest recipient of face-value payoffs; it also
happened to be one of the major commodity speculators referenced by
Griftopia).

--
virtualization experience starting Jan1968, online at home since Mar1970

Gene Amdahl Dies at 92

starsoul@MINDSPRING.COM (Lizette Koehler) writes:
Gene Amdahl, who helped IBM usher in general-purpose computers in the 1960s and
challenged the company's dominance a decade later with his eponymous machines,
has died. He was 92.
He died on Nov. 10 at Vi at Palo Alto, a continuing care retirement community in
Palo Alto, California, his wife Marian Amdahl said in a telephone interview. The
cause was pneumonia, and he had Alzheimer's disease for about five years.

ACS was shut down after IBM management decided it would advance the
state of the art too fast and they could lose control of the market.
Talks about ACS features that finally show up in ES/9000 more than two
decades later; also references multithreading patents. I had gotten
sucked into a project that was looking at multithreading the 370/195
... which never shipped.

Early 70s, there was the FS project, which was completely different
from 360&370 and was going to completely replace 360/370 ... and
internal politics was shutting down 370 efforts. The lack of 370
products during this period is credited with giving clone processors
their market foothold.
http://people.cs.clemson.edu/~mark/fs.html

some past FS posts
http://www.garlic.com/~lynn/submain.html#futuresys

--
virtualization experience starting Jan1968, online at home since Mar1970

IMPI (System/38 / AS/400 historical)

Quadibloc <jsavard@ecn.ab.ca> writes:
So, despite System i currently being made up of Power PC chips, IBM may be
planning a comeback for this technology... in the future. After all, we're not
in the Future yet - no flying cars!

Circa 1980, IBM was going to move a multitude of internal
microprocessors to 801/risc Iliad chips ... 370 low & mid-range, as/400
(follow-on to s/38), numerous control processors, etc. For whatever
reason these projects floundered ... and fell back to traditional CISC;
there was a quick & dirty effort to do a CISC chip for the as/400.

There was also the 801/risc ROMP chip that was going to be used for a
follow-on to the displaywriter ... and the project was canceled. They
then looked around and decided to retarget it to the unix workstation
market ... which required some changes to the chip. They then hired the
company that had done the AT&T unix port for PC/IX ... to do one for
ROMP ... which was announced as AIX on the PC/RT. The RIOS chipset was
then done for the follow-on to the PC/RT, the RS/6000. Part of 801/risc
was no provision for cache consistency ... typically required for
multiprocessor operation. RS/6000 was a large, power-hungry five-chip
set. some past 801, risc, iliad, romp, rios, power/pc, etc posts
http://www.garlic.com/~lynn/subtopic.html#801

A joint effort with Motorola & Apple was created to do a single-chip
801/risc that also supported cache consistency ... sort of using
Motorola 88k cache-consistency technology. The executive we reported
directly to on our HA/CMP product then went over to head up this new
organization.
https://en.wikipedia.org/wiki/AIM_alliance

Over the next couple of weeks, cluster scaleup was transferred,
announced as a supercomputer, and we were told we couldn't work on
anything with more than four processors.

press

17Feb1992 article: cluster scaleup announced for "scientific and
technical ONLY" (part of the issue was that mainframe DB2 people
complained that if I was allowed to go ahead with the RDBMS stuff, it
would be at least five years ahead of what they were doing).
http://www.garlic.com/~lynn/2001n.html#6000clusters1

11May1992 article: national lab interest caught IBM by "surprise"
(even though I had been working with them off & on ... going back to
the late 70s, when they were looking at a large computer cluster farm
of 4341s)
http://www.garlic.com/~lynn/2001n.html#6000clusters2

--
virtualization experience starting Jan1968, online at home since Mar1970

IMPI (System/38 / AS/400 historical)

Mike Hore <mike_horeREM@OVE.invalid.aapt.net.au> writes:
Sort of true, though Frank Soltis' books make it clear that he had some
of the ideas before he was called into the FS effort, then he refined
his ideas on single-level store, then after he was kicked out for
interesting political reasons he further developed these ideas in the
S/38 design. So there was some crossover but the truth was a bit
complicated. So I don't think I'm disagreeing, just adding some detail.

much of single-level store came from tss/360 (with some academic
literature from the period).

tss/360 had a performance issue: things page faulted a single 4k block
at a time, and the application was disabled/non-runnable during the
page fault.

at the univ. we got to work with tss/360 and cp/67, both on the univ.
360/67 on weekends. In spring of 1968 ... the IBM SE working with
tss/360 and I (working with cp/67) did an interactive fortran edit,
compile and execute benchmark ... tss/360 running four simulated users
had worse throughput and interactive response than cp67/cms did with 35
users.

I continued to work on 360 (& then 370) stuff all during the FS period,
even periodically ridiculing their activities. One of the things I did
was a paged-mapped filesystem for CMS ... which I've characterized as
having learned all the things not to do from tss/360. I got something
like three times the throughput of the standard CMS filesystem on
moderately file-I/O-intensive benchmarks. some past posts
http://www.garlic.com/~lynn/submain.html#mmap

The throughput issue with TSS/360-like synchronous page faults wasn't a
problem in the S/38 (low-end) market. Also, S/38 has been characterized
as simplifying things compared to FS ... one of the things it did was
scatter block allocation across all connected disk drives. As a result,
the complete filesystem had to be backed up as a single entity ... and
any single disk failure required the whole system to be restored as an
integral operation.

rationality

"Osmium" <r124c4u102@comcast.net> writes:
Well, there you go. It's like buying a Hi_Fi setup, you can spend a
huge amount of money but that one is only 20% better than the one that
is 1,000 times cheaper.

IMO the Humvee and the F-35 are poster boys for a government run
amok. Eisenhower warned us about the military industrial complex , and
it happened.

I found the price for a Humvee, $220,000 for the current armored, kind
of, ones. I wasn't able to find the price of a WW II jeep but I would
be astonished if it was more than $1,000. It looks like my guess of
50:1 was too high considering inflation.

search: <how much does a humvee cost>

The original Humvee went out without armor. Soldiers in the field were
then welding on all sorts of non-approved stuff, and the approved
uparmored vehicles were few and late. Original $70k; uparmor package
$146k
https://en.wikipedia.org/wiki/Humvee

Last decade, a cousin of white house chief of staff Card was dealing
with the Iraqis at the UN and was given evidence that the WMDs had been
decommissioned, provides the information to cousin Card, Powell and
others, then gets locked up in a military hospital .... "EXTREME
PREJUDICE -- The Terrifying Story of the Patriot Act and the Cover Ups
of 9/11 and Iraq"
http://www.amazon.com/EXTREME-PREJUDICE-Terrifying-Story-Patriot-ebook/dp/B004HYHBK2/

How Private Contractors Have Created a Shadow NSA; A new cybersecurity
elite moves between government and private practice, taking state
secrets with them (also references oil rig company that was transformed
into one of the largest defense contractors after former SECDEF and
future VP becomes CEO, including no-bid contracts in Iraq)
http://www.thenation.com/article/how-private-contractors-have-created-shadow-nsa/

There was a recent article about the tens of billions paid, including
how the company got paid for each convoy that went out ... so they were
sending out empty convoys, and convoys with little or no escort through
hostile country. search: cheney halliburton criminal

Compiler

0000000433f07816-dmarc-request@LISTSERV.UA.EDU (Paul Gilmartin) writes:
Well, yes. Something about core competency. Spend programming
resource on an optimizing compiler which can produce object code
faster, better, cheaper than redundant effort by human programmers.
And the next generation ISA can be exploited merely by recompiling,
not recoding.

modern compilers have detailed knowledge of the ISA and lots of the
programming tricks/optimizations/techniques done by the very best
assembler programmers (compiler state-of-the-art is typically considered
to have reached this point, for most things at least, by the late 80s).

One of the issues is that the C language has some ill-defined &
ambiguous features that inhibit better optimization (optimization that
is possible in some better-defined languages).
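As an illustrative C sketch (my example, not from the above): pointer
aliasing is one such ambiguity. The compiler must assume a
stored-through pointer may overlap the input array, so it can't keep
the accumulator in a register; C99 `restrict` is the escape hatch that
recovers an assumption a Pascal or Fortran compiler can make by default:

```c
/* the compiler must assume *sum may alias a[i], so the running total
   is stored/reloaded through memory every iteration */
void total(const int *a, int n, int *sum)
{
    *sum = 0;
    for (int i = 0; i < n; i++)
        *sum += a[i];
}

/* C99 restrict asserts no overlap, letting the total live in a
   register until the final store */
void total_restrict(const int *restrict a, int n, int *restrict sum)
{
    int t = 0;
    for (int i = 0; i < n; i++)
        t += a[i];
    *sum = t;
}
```

Both produce the same result; only the generated code differs.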

minor reference (not only optimization issues but also bugs)
http://www.ghs.com/products/misrac.html
This flexibility comes at a cost however. Ambiguities in the C language,
along with certain syntaxes, consistently trip up even the best
programmers and result in bugs. For software developers, this means a
large amount of unexpected time spent finding bugs. For managers, this
often means the single largest risk to their project.

... snip ...

The original mainframe TCP/IP product was done in pascal/vs ... and
didn't have many of the programming bugs that have been epidemic in C
language implementations.

--
virtualization experience starting Jan1968, online at home since Mar1970

IMPI (System/38 / AS/400 historical)

Mike Hore <mike_horeREM@OVE.invalid.aapt.net.au> writes:
Thanks for all that interesting stuff about performance.

It's also interesting that Bob Evans in his memoirs thought that the
whole existence of the then GSD was a "strategic blunder" because it led
to the development of systems that weren't 360/370 compatible. He cites
the loss of the midrange market share as going from 65% in 1970 to less
than 20% in 1980.

It's hard to argue with the figures, but I guess Frank Soltis and the
Rochester people might argue that

(a) They knew their small-business customers much better than the rest
of the organisation, who were big-business oriented, and just didn't get
it as far as small businesses were concerned.

(b) The whole reason the Rochester people were kicked off the FS project
was that the DoJ was threatening the breakup of IBM, which would have
led to GSD becoming a separate company in competition with IBM, and they
would have had to develop their own systems anyway.

(c) Very arguably, the drop in market share might have happened anyway,
as the whole field was changing rapidly.

Who was right? Maybe an exam question for a course on computer history??

as mentioned in past discussions about low&midrange 370 (4300s) and DEC
in the same market ... PCs and workstations started to take-over the
market by mid-80s (which also applies to the GSD market, also one of the
reasons why previously mentioned displaywriter followon was canceled).

past discussions noted that 4300s sold about the same as DEC into that
market ... in single or small number unit orders. The big 4300
difference was large corporate orders of hundreds at a time. Starting
in the late 70s there was a big explosion in 4300 sales ... some old email
http://www.garlic.com/~lynn/lhwemail.html#43xx

There was anticipation that the 4361/4381 (follow-on to 4331/4341) would
continue to see the explosion in sales, but by that time the market was
moving to large PCs and workstations ... also hitting s/32, s/34, s/36
(as/400 initially was a combined followon to both s/36 & s/38)

[CM] Coding with dad on the Dragon 32

Morten Reistad <first@last.name.invalid> writes:
The OSF/1 designers never saw the possibility to have multiple
personalities. These have a bad reputation after the "unix" retrofits
for VMS (Eunice?), Primos (Primix), etc, but if this were designed
in from the start, like AIX partially has.

OSF/1 could have been both a multics-like OS and have a unix
top layer, but with some hypervisor-like functions for the extra
functionality that is unknown to the unix api and ui.

QNX has done this, with a "just another unix" interface and a
"real QNX" interface.

Well, perhaps some OSF/1 designers did.

the politics I saw was that it was a response to the AT&T/SUN tieup
https://en.wikipedia.org/wiki/Unix_wars
While this decision was applauded by customers and the trade press,
certain other Unix licensees feared Sun would be unduly advantaged. They
formed the Open Software Foundation (OSF) in 1988. The same year, AT&T
and another group of licensees responded by forming UNIX International
(UI). Technical issues soon took a back seat to vicious and public
commercial competition between the two "open" versions of Unix, with
X/Open holding the middle ground. A 1990 study of various Unix versions'
reliability found that on each version, between a quarter and a third of
operating system utilities could be made to crash by fuzzing; the
researchers attributed this, in part, to the "race for features, power,
and performance" resulting from BSD-System V rivalry, which left
developers little time to worry about reliability.[2]

IBM Palo Alto Science Center had been working with UCB to do a port to
mainframe 370 ... when they got redirected to do the port to the PC/RT
(which came out as AOS, an alternative to AIX).
http://www.garlic.com/~lynn/subtopic.html#801

The science center had done (virtual machine) cp67. As the cp67/cms
group grew, it split off from the science center (on the 4th flr,
multics was on the 5th flr) and took over the IBM Boston Programming
Center (on the 3rd flr). As the group expanded ... especially with work
on the cp67 morph to vm370 ... they moved out to the vacant former SBC
bldg in Burlington Mall.

lots of groups were directed to refocus on FS stuff and suspend 370
activity. When FS imploded, there was a mad rush to get stuff back into
the 370 product pipelines. As part of that, the head of POK eventually
convinced corporate to kill the vm370 product, shutdown the Burlington
Mall group and move everybody to POK to work on MVS/XA (or otherwise
MVS/XA wouldn't ship on time, some 6-7 years later).

They weren't going to tell the VM370 people until the last minute, to
minimize the number that could escape the move to POK; however the
information managed to leak and some number managed to escape (there
was a joke that the head of POK was one of the largest contributors to
DEC VMS; some others showed up at Prime). There was also a witch hunt
for the source of the leak ... fortunately for me, nobody gave up the
source.

--
virtualization experience starting Jan1968, online at home since Mar1970

Economic Mess

Securitized mortgages (CDOs) had been used during the S&L crisis to
obfuscate fraudulent mortgages (the posterchild was office bldgs in the
Dallas/Ft.Worth area that turned out to be empty lots). In the late
90s, I was asked to look at improving the integrity of supporting
documents as countermeasures. By then loan originators were securitizing
loans&mortgages (CDOs) and paying for triple-A ratings (when both the
sellers and the rating agencies knew they weren't worth triple-A, from
Oct2008 congressional testimony). A triple-A rating trumps supporting
documentation, so they could start doing no-documentation liar
loans. Being able to pay for triple-A eliminated any reason for loan
originators to care about borrowers' qualifications or loan quality;
they could sell off all loans, as fast as they could be made, to
customers restricted to dealing in "safe" investments (like large
pension funds; the claim is this accounts for a 30% loss in funds and
a trillions shortfall for pensions), largely enabling over $27T in
securitizations 2001-2008
Evil Wall Street Exports Boomed With 'Fools' Born to Buy Debt
http://www.bloomberg.com/apps/news?pid=newsarchive&refer=home&sid=a0jln3.CSS6c

However, the four largest TBTF "banks" were still carrying $5.2T
"off-book" at the end of 2008 ... over 100 times that of Lehman ...
allowed to play by different rules
Bank's Hidden Junk Menaces $1 Trillion Purge
http://www.bloomberg.com/apps/news?pid=newsarchive&sid=akv_p6LBNIdw&refer=home

If triple-A rated toxic CDOs weren't bad enough, they had started
creating securitized mortgages designed to fail, pay for the triple-A
rating and sell off to their customer/victims and then take out CDS
gambling bets that they would fail (creating enormous demand for dodgy
mortgages). Later the largest holder of the CDS gambling bets was AIG
... who was negotiating to pay off at 50-60 cents on the dollar when
the sec. of treasury steps in, forces them to sign a document that
they can't sue those making the CDS gambling bets, and to take TARP
funds to pay off at face value (AIG is the largest recipient of TARP
funds, and the firm formerly headed by the sec. of treasury is the
largest recipient of face-value payoffs, which also happened to be one
of the major commodity speculators referenced by griftopia).

and, along with his wife, blocking CFTC regulation of CDS gambling
bets ... originally described as a gift to ENRON. When the head of the
CFTC proposed regulating CDS gambling bets, that head was replaced by
the wife of #2, while he got a law passed preventing regulation of CDS
gambling bets; she then resigned and joined the ENRON board, on the
financial audit committee.

Rhetoric in congress regarding Sarbanes-Oxley was that it would
prevent future ENRONs and guarantee that executives and auditors did
jail time for fraudulent financial filings. However, it required the
SEC to do something. Possibly because even GAO didn't believe the SEC
was doing anything, GAO started doing reports of fraudulent financial
filings, even showing an increase after SOX went into effect (and
nobody doing jail time). It turns out that SOX also had a provision for
the SEC to do something about the rating agencies (which played a
pivotal role in the economic mess) ... but little seems to have been
done there either.

Congress asked the person if new regulations were required. He said
that while new regulations might be required, much more important was
transparency and visibility (since SEC wasn't enforcing the existing
regulations). The current scenario, when they do anything, is fines &
"deferred prosecution" ... so far the TBTF have totaled $300B of fines
since the economic mess. However, that $300B isn't just for the
fraudulent CDOs and CDSs, but also manipulating LIBOR & FOREX markets,
manipulating gold, oil and other commodities, money laundering for
terrorists and drug cartels, fraudulent foreclosures, etc. Given the
trillions involved, the $300B is just being viewed as cost of criminal
business by the TBTF.

Peter Flass <peter_flass@yahoo.com> writes:
We ran our administrative stuff on a remote IBM mainframe using CICS. We
wanted email, conferencing, and the ability to run some stuff for profs and
students.

my brother was the regional marketing rep for Apple in the mid-80s (the
largest physical area in CONUS) ... one of the things he figured out
was dialing in remotely to the hdqtrs S/38 that had inventories,
factory schedules and deliveries ... being able to monitor product
schedules.

As an undergraduate in the 60s, I was hired to support IBM mainframe
software. The univ. library had gotten an ONR grant to do an online
catalog. Part of the money went to getting a 2321 datacell. The effort
was also selected to be betatest for the original CICS product. I was
then tasked to support and debug CICS (CICS had been developed at a
customer location and the library chose some BDAM options/features
different from the original customer ... which resulted in some bugs in
how CICS does file OPEN). some past
CICS &/or BDAM posts
http://www.garlic.com/~lynn/submain.html#cics

... different "PROFS" ... I was blamed for online computer
communication/conferencing (precursor to social media) on the internal
network ... larger than arpanet/internet from just about beginning until
sometime mid-80s ... some past posts
http://www.garlic.com/~lynn/subnetwork.html#internalnet

There was an internal group working on a menu-based system for
non-computer-literate users (especially managers and executives), which
was released as PROFS. They had picked up a lot of internal
applications that were embedded in PROFS ... including a very early
version of the VMSG email client. Then when the VMSG author offered
them a much enhanced version, the PROFS group tried to get him fired
(they had claimed credit for all parts of PROFS). Things quieted down
after he showed that all PROFS messages in the world contained his
initials in a non-displayed field. Afterwards, the VMSG author only
distributed the source to me and one other person.

trivia ... at the time of the 1Jan1983 arpanet/internet cutover
to TCP/IP, it had approx. 100 IMP nodes and 255 connected hosts
... at the same time the internal network was rapidly approaching
1000 nodes .... old post with list of corporate locations that
added one or more nodes during 1983.
http://www.garlic.com/~lynn/2006k.html#8

in the early 80s, we had a battle at SJR with corporate
auditors. Corporate wanted the 3270 logon screen to say "For Business
Purposes Only", to search all computer files, and to eliminate all
"demo" programs (aka games). We got SJR to change the 3270 logon screen
to say "For Management Approved Uses Only", worked hard to get the
corporate business conduct guide (given to all employees) to equate
personal online files to the privacy provisions given to a personal
locked desk, and to support leaving "demo" programs available.

Corporate auditors were onsite ... doing things like after-hours sweeps
of the bldg and offices looking for unsecured confidential information
left out. We had placed 6670 printers (computer connected IBM Copier3s)
out in all the department areas, with support for colored paper in the
alternate paper feed drawer ... which was used to print the file
separator page. Since the page was nominally blank, the printing of the
separator page was modified to select a random entry from some files
(ibmjargon, and a file with collected quotations). One of the print
files left out on a departmental 6670 printer had the following:
[Business Maxims:] Signs, real and imagined, which belong on the walls
of the nation's offices:
1) Never Try to Teach a Pig to Sing; It Wastes Your Time and It Annoys the Pig.
2) Sometimes the Crowd IS Right.
3) Auditors Are the People Who Go in After the War Is Lost and Bayonet the Wounded.
4) To Err Is Human -- To Forgive Is Not Company Policy.

and the corporate auditors complained to management that we were
ridiculing them.
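Selecting the random separator-page entry can be done in a single pass
with reservoir sampling. This is a hypothetical sketch of the
technique; the actual 6670 separator code isn't shown above, and the
file handling here is assumed:

```c
#include <stdio.h>
#include <stdlib.h>
#include <string.h>

/* copy one uniformly random line from fp into out (size outsz);
   returns 0 on success, -1 if the file is empty */
int pick_random_line(FILE *fp, char *out, size_t outsz, unsigned seed)
{
    char line[512];
    long n = 0;
    srand(seed);
    while (fgets(line, sizeof line, fp)) {
        n++;
        if (rand() % n == 0) {      /* keep this line with prob 1/n */
            strncpy(out, line, outsz - 1);
            out[outsz - 1] = '\0';
        }
    }
    return n ? 0 : -1;
}
```

The reservoir trick keeps the k-th line with probability 1/k, which
works out to a uniform choice without knowing the line count in
advance, so the quotes file never has to be read twice.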

as an aside ... an entry in ibmjargon referring to part of the
online computer communication:
Tandem Memos - n. Something constructive but hard to control;
a fresh of breath air (sic). That's another Tandem Memos. A phrase to
worry middle management. It refers to the computer-based conference
(widely distributed in 1981) in which many technical personnel
expressed dissatisfaction with the tools available to them at that
time, and also constructively criticised the way products were
developed. The memos are required reading for anyone with a serious
interest in quality products. If you have not seen the memos, try
reading the November 1981 Datamation summary.

We had nearly 50 draft patents that would have some 1800+ claims (all
assigned; we got a couple thousand dollars for each patent) ... and the
patent firm predicted well over 100 patents before we were done. Then
some executive looked at how much all the patents would cost (not just
our awards, but filing, both in the US and internationally) ... and
directed that all the claims be repackaged in nine patents. Later the
patent office came back and said that it was getting tired of the
humongous patents where the filing fee didn't even cover the cost of
reading the claims ... and directed that the claims be reorganized into
at least 30 patents (and corporate hdqtrs ruled that inventor awards
were only for original patents ... not for the reorganized "derivative"
patents).

--
virtualization experience starting Jan1968, online at home since Mar1970

rationality

my uncle was a tank mechanic for most of ww2 ... but shermans were very
vulnerable; the british referred to them as tommy cookers ... they
washed out the inside and tried to find the next crew. towards the end
they were shanghaiing cooks and potato peelers ... apparently what
saved my uncle was that he was too big to fit inside.

when i was 8, my uncle first taught me to drive a '38 1.5 ton chevy
truck (starter was a pedal on the floor, no synchromesh, every shift
was a double-clutch) and other vehicles ... mostly on dirt roads ...
but I got to drive on paved highways every once in a while. on dirt
roads with deep ruts, you tried to drive just next to the ruts (it also
tended to be a smoother ride).

Department of Defense Head Ashton Carter Enlists Silicon Valley to
Transform the Military
http://www.wired.com/2015/11/secretary-of-defense-ashton-carter/
"Wooing Silicon Valley may prove easier than the battles Carter faces
on his home turf. The Pentagon is a bulging, labyrinthine, inefficient
organization with misaligned resources--the military has 25 percent
more real estate than it needs, for example, but not enough hackers."

... snip ...

... not to mention the military-industrial complex, Occupy the
Economy: Challenging Capitalism loc990-93:
President Eisenhower's comment about the military-industry complex is
fairly well known. Less known is the comment of Seymour Melman, who
was a Columbia University professor, who talked about the "permanent
war economy." Correct me if I'm wrong, but the weapons industry, which
is highly influential in the U.S., is extremely capital-intensive but
it actually doesn't provide many jobs. It's not labor-intensive.

Miniskirts and mainframes

Alan Bowler <atbowler@thinkage.ca> writes:
No. In both cases, it was timesharing users. However, I am cheating
a little. Gcos timesharing restricts individual timesharing users
to about 3/4 megabyte programs, and does not support character at a time
terminal I/O; basic line editing was (is) done in a front end, and
TSS does not seen the line until the user hits CR. So no screen
editors like VI which is what most of the VAX users were using.

Gcos users were mostly Fortran, and Pascal with some but there
was also B, APL, Algol, Cobol, assembler, Lisp, Snobol, Spitbol,
and later C. Gcos design encourages users to run long grindy big
programs as batch jobs so the heavy CPU stuff was no all trying to
run simultaneously.

What I was really objecting to in the prof characterization was
calling the VAX a mini. The 750 was as much a mainframe as the
DPS-8/49.

as i've periodically pontificated, VAX & 4331/4341 sold into the same
mid-range market ... and in similar numbers ... at least for small
number orders ... the big difference for 4331/4341 was the large
corporate orders of hundreds of machines. as before ... 4361/4381 was
the followon to 4331/4341 and expected to see a similar explosion in
sales ... but as the VAX sales numbers show ... by mid-80s, the
mid-range market was starting to move to workstations & large PCs.
http://www.garlic.com/~lynn/2002f.html#0 Computers in Science Fiction

4300s were low-end & mid-range mainframes ... but large corporations
were also putting large numbers of 4300s out in departmental
areas. However, datacenter clusters of 4341s were less expensive than a
3033, had higher aggregate throughput, and a smaller floor &
environmental footprint. In 1979, I got sucked into doing a benchmark
for a lab looking at getting 70 4341s for a compute farm ... sort of
the leading edge of the coming supercomputers.
158 3031 4341

rain/rain4 was from Lawrence Radiation lab ... and ran on cdc6600 in
35.77 secs.

... snip ...

In the mid-80s I was working with director of NSF on interconnecting the
NSF supercomputer center (which morphs into the NSFNET backbone with
connection of regional networks, precursor to modern internet) as well
as large clusters of an arbitrary mix of 370 & 801/risc chips. Old
email about having to choose between presenting to the NSF director and
getting somebody else to do it in order to attend a cluster meeting
http://www.garlic.com/~lynn/2011b.html#email850314
http://www.garlic.com/~lynn/2007d.html#email850315

including working on cluster scaleup with both national labs as well as
RDBMS vendors for commercial ... reference to the commercial cluster
scaleup meeting in Ellison's conference room, Jan1992
http://www.garlic.com/~lynn/95.html#13

within a couple weeks of the Ellison meeting, cluster scaleup was
transferred, announced as supercomputer (for technical and scientific
*ONLY*), and we were told we couldn't work on anything with more than
four processors. some cluster scaleup email from the period
http://www.garlic.com/~lynn/lhwemail.html#medusa

IMPI (System/38 / AS/400 historical)

Charlie Gibbs <cgibbs@kltpzyxm.invalid> writes:
Not likely. The mayor's brother-in-law's construction company doesn't
build them that way, and they make too much money filling potholes.

I drove cross country in the winter to join the science center in
cambridge ... there was a marked difference in I90 road quality
crossing into mass (mass turnpike) ... i made reference to frost heaves
being worse on the mass turnpike than on Idaho county roads in the
rockies (i.e. little or no road bed, rather than the 6ft deep bed
needed as a countermeasure to frost heaves). Mass natives joked that
certain interests required constant/recurring road repair every year
(relatively small scale compared to the "big dig", where claims were
that 90%, nearly $20B?, disappeared into pockets).

Miniskirts and mainframes

Peter Flass <peter_flass@yahoo.com> writes:
There was a world of difference in software usability, both systems and
applications. I'd venture to say, partly based on your comments, that the
corporate users of 43xx machines used them to run dedicated applications,
developed centrally and distributed. In no way would I turn a branch
office staff loose on a 43xx. A VAX could be used that way, but was simple
enough to be managed, for the most part, by a fraction of a staff member's
time, and easy enough to use that people could develop local applications
on it, to the same extent that they might on a PC.

The IBM SHARE user group made a point that VAX was simpler to manage
than VM370/CMS. The claim was that if IBM had simplified VM370 further,
then rather than 4300s and VAXes selling about the same number of
machines (except for large corporate 4300 orders of hundreds at a
time), 4300s would have made significant further inroads into VAX
sales.

... when POK was convincing corporate to kill VM370 and transfer all
the people to POK for MVS/XA (or otherwise MVS/XA wouldn't ship on
time). POK managed to get the VM370 development group in Burlington
Mall closed and the people transferred to POK ... but many escaped and
stayed in the Boston area (the joke was that the head of POK was one of
the largest contributors to VMS).

Endicott did finally manage to save the VM370 product mission ... but
had to reconstitute a development group from scratch. In Aug1976,
Tymshare had started making its vm370/cms online computer conferencing
facility available for free to the SHARE user group ... as VMSHARE ...
archives
http://vm.marist.edu/~vmshare

there are vmshare comments in the late 70s & early 80s about vm370/cms
code quality as Endicott is rebuilding a development group.

At the time of the great arpanet/internet cutover to internetworking
protocol on 1Jan1983, it had approx. 100 IMPs and 255 connected hosts
... while the internal network was rapidly approaching 1000 nodes. A
big part of this was explosion in vm/4341s going in all over the world
... including at branch offices.

nearly identical internal network technology was also being used for
BITNET ... the corporate sponsored univ. network ... which for a time
was also larger than arpanet/internet. Large universities were
installing multiple vm/4341s around campus and departments ... usually
requiring some fraction of a student employee (typically more admin
attention than a VMS system ... but still much less than a full time
employee). BITNET ref
https://en.wikipedia.org/wiki/BITNET
BITNET posts
http://www.garlic.com/~lynn/subnetwork.html#bitnet

the above was sent to me somewhat as a result of being blamed for
online computer conferencing on the internal network in the late 70s &
early 80s.

The POK favorite son operating system (MVS) wanted to play in the
explosion in 4300 sales, but had difficulty. The real problem was that
a typical MVS system required 10-20 fulltime support people (including
dedicated operators) ... which didn't scale for an enterprise looking
at installing 1000 machines out in departmental areas. There was also
the problem that the entry & mid-range disks were all FBA (3310&3370).
For whatever reason MVS failed to support FBA, only CKD (down to this
day, even though real CKD disks haven't been made for decades). The
only CKD disks were the high-end 3380s ... whose datacenter
environmentals didn't play well out in departmental areas. This
eventually resulted in creating the 3375 CKD disk product (CKD
architecture simulated on 3370 FBA disks) ... but it had little impact
on MVS being able to penetrate the mid-range, departmental market. Some
CKD & FBA posts
http://www.garlic.com/~lynn/submain.html#dasd

there was a large chipcard pilot deployment in the US around the turn
of the century ... however it was in the Yes Card period ... the issue
was that it was possible to use the same skimming exploits that collect
information for counterfeit magstripe cards ... for making counterfeit
chipcards.

Gov. LEOs gave a description of Yes Card cases at an ATM Integrity
TaskForce meeting ... prompting somebody in the audience to exclaim
that they had managed to spend billions of dollars to prove that
chipcards are less secure than magstripe.

In the wake of that, all evidence of the pilot evaporated w/o a trace
and the speculation was that it would be a long time before things were
tried in the US again (waiting for more glitches to be worked out in
other jurisdictions).

The problem was 1) it was as easy to make counterfeit chipcards as
magstripe and 2) they had moved business rules out into the chip. A
chipcard terminal would ask the chip 1) was the correct PIN entered, 2)
should the transaction be done offline, 3) is the transaction within the
credit limit. A counterfeit Yes Card would answer "YES" to all three,
so it didn't need to know the correct PIN and didn't need to do an
online check with the backend (and all transactions are approved). The
traditional countermeasure for a counterfeit magstripe card is to
deactivate the account at the backend ... but that doesn't work with a
Yes Card.
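The three-question terminal flow above can be shown as a conceptual C
sketch. This is not actual EMV code; the names and the
function-pointer abstraction are mine, purely to illustrate why moving
the business rules into the chip fails against a counterfeit card:

```c
#include <stdbool.h>

/* the three questions the terminal asks the chip, per the description */
typedef struct {
    bool (*pin_ok)(int pin);           /* was the correct PIN entered? */
    bool (*offline_ok)(void);          /* do the transaction offline?  */
    bool (*within_limit)(double amt);  /* within the credit limit?     */
} chip_t;

/* terminal trusts the chip: offline-approve if it says yes to all three */
bool terminal_offline_approve(const chip_t *c, int pin, double amt)
{
    return c->pin_ok(pin) && c->offline_ok() && c->within_limit(amt);
}

/* a counterfeit "Yes Card" answers yes regardless: any PIN works, the
   backend is never consulted, so deactivating the account at the
   backend is no countermeasure */
static bool yes_pin(int pin)      { (void)pin; return true; }
static bool yes_offline(void)     { return true; }
static bool yes_limit(double amt) { (void)amt; return true; }

const chip_t yes_card = { yes_pin, yes_offline, yes_limit };
```

The fix is the reverse trust direction: the terminal (or backend)
verifies the card cryptographically rather than asking it questions.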

I had warned the people doing the pilot about the problems, but they
went ahead and did it anyway (they were myopically focused on
lost/stolen cards and ignored the counterfeit Yes Card scenarios).

disclaimer: in the mid/late 90s, I was asked to do a protocol&chip that
had no such vulnerabilities and was significantly more secure ... then
the transit industry also requested that it be able to run contactless
within the power&time constraints of a transit turnstile (w/o any
reduction in security&integrity) ... have you seen how long these
transactions take? ... even when they are getting full contact power.

leading up to the conference ... we spent a lot of time with one of the
other companies including regular meetings with their CEO ... who in
prior life had been president of DSD (pok mainframe).

a lot of the work had started in the x9a10 financial standard working
group, which had been given the requirement to preserve the integrity
of the financial infrastructure for ALL retail payments ... and as a
result it was required to work not only for point-of-sale ... but for
ALL payments (including internet). The downside was that eliminating
much of the fraud&risk commoditized payments and reduced barriers to
entry. '99 was also the year that GLBA passed (now better known for the
repeal of Glass-Steagall); rhetoric on the floor of congress was that
the (original) primary purpose of GLBA was to prevent new entries into
banking (especially to prevent competition from entities with much more
efficient technologies).

IMPI (System/38 / AS/400 historical)

Andrew Swallow <am.swallow@btinternet.com> writes:
Lots of repairs are expensive. It may be cheaper to go for a design of
road that can go for say 20 years between resurfacing.

major US highways are designed for 18wheeler axle-ton mile load lifetimes
(modulo basic structural issues like frost heaves) ... other traffic is
effectively negligible:
603.1 Introduction

The primary goal of the design of the pavement structural section is
to provide a structurally stable and durable pavement and base system
which, with a minimum of maintenance, will carry the projected traffic
loading for the designated design period. This topic discusses the
factors to be considered and procedures to be followed in developing a
projection of truck traffic for design of the "pavement structure" or
the structural section for specific projects.

Pavement structural sections are designed to carry the projected truck
traffic considering the expanded truck traffic volume, mix, and the
axle loads converted to 80 kN equivalent single axle loads (ESAL's)
expected to occur during the design period. The effects on pavement
life of passenger cars, pickups, and two-axle trucks are considered to
be negligible.

Traffic information that is required for structural section design
includes axle loads, axle configurations, and number of
applications. The results of the AASHO Road Test (performed in the
early 1960's in Illinois) have shown that the damaging effect of the
passage of an axle load can be represented by a number of 80 kN
ESAL's. For example, one application of a 53 kN single axle load was
found to cause damage equal to an application of approximately 0.23 of
an 80 kN single axle load, and four applications of a 53 kN single
axle were found to cause the same damage (or reduction in
serviceability) as one application of an 80 kN single axle.

Clear up this C9 business (I hope)

Michael Black <et472@ncf.ca> writes:
But then there was the NEC V20, which was 8088 pin compatible, but had
the instruction set of the 80186. That worked out well I guess, the
added instructions of the 80186, but no extra hardware to get in the
way. But it could also be switched to become an 8080, to run the old
code. I know there was excitement at the time, but I'm not sure in
the long run how many actually used that feature. I think I have a
V20 somewhere around.

Securitized mortgages had been used during the S&L crisis to obfuscate
fraudulent mortgages (the posterchild was office bldgs in the
Dallas/Ft.Worth area that turned out to be empty lots). In the late
90s, I was asked to look at improving the integrity of supporting
documents as countermeasures. By then loan originators were
securitizing loans&mortgages and paying for triple-A ratings (when both
the sellers and the rating agencies knew they weren't worth triple-A,
from Oct2008 congressional testimony). A triple-A rating trumps
supporting documentation, so they could start doing no-documentation
liar loans. Being able to pay for triple-A eliminated any reason for
loan originators to care about borrowers' qualifications or loan
quality; they could sell off all loans, as fast as they could be made,
to customers restricted to dealing in "safe" investments (like large
pension funds; the claim is this accounts for a 30% loss in funds and
a trillions shortfall for pensions), largely enabling over $27T in
securitizations 2001-2008
Evil Wall Street Exports Boomed With 'Fools' Born to Buy Debt
http://www.bloomberg.com/apps/news?pid=newsarchive&refer=home&sid=a0jln3.CSS6c

One of the first market freezes was muni-bond market when investors
began to realize that triple-A ratings were for sale and it might not
be possible to trust any ratings (Warren Buffett then stepped in and
started offering insurance to unfreeze the market).

Supposedly the $700B TARP funds were appropriated for the purchase of
toxic assets; however, at the end of 2008 just the four largest
too-big-to-fail banks were still carrying $5.2T off-book
Bank's Hidden Junk Menaces $1 Trillion Purge
http://www.bloomberg.com/apps/news?pid=newsarchive&sid=akv_p6LBNIdw&refer=home

... and TARP funds were actually used for other purposes; the Federal
Reserve stepped in ... buying trillions at 98 cents on the dollar and
providing tens of trillions in ZIRP funds. The FED fought a long, hard
legal battle to stop disclosing what it was doing, and lost. Shortly
afterwards the FED Chair held a press conference where he said that he
had expected the TBTF would use the ZIRP funds to lend to mainstreet,
but when they didn't, he had no way to force them (though that didn't
stop the ZIRP funds). However, supposedly one of the reasons he was
chosen as FED Chair was that he was a student of the Depression, where
the same thing was tried with the same results ... so there should have
been no expectation of something different this time.

Jan2009 (a decade after being asked to try and help prevent the
economic mess), I was asked to HTML'ize the Pecora Hearings (the 30s
senate hearings into the '29 crash, which resulted in criminal
convictions & jail terms, and in Glass-Steagall) with lots of internal
XREFs and URLs between what happened then and what happened this time
(references that the new congress might have the appetite to do
something). I worked on it for a while and then got a call saying it
wouldn't be needed after all (references to enormous piles of
wallstreet money totally burying Washington).

Note in the late 90s, a Federal LEO described how some number of
investment bankers had walked away clean from the S&L crisis and were
currently running Internet IPO mills, a variation on pump&dump ... but
new IPOs needed to fail so that the field would be clear for the next
round of IPOs. The comment was that they were predicted to get into
mortgages next.

pg23/loc594-96:
The reputations of the great US credit ratings agencies had been built
up over the course of almost a century in rating bonds. The public
used these ratings as an indicator of the likelihood of default. In
the late 1990s and early 2000s, the ratings agencies took on
themselves a new task: not just of rating bonds, but also of rating
more complex securities, the new (complex) financial derivatives.

... snip ...

aka ... they could trade on their bond-rating reputation to sell
triple-A ratings on toxic CDOs. Note that being able to buy triple-A
ratings enabled loan originators not to have to care about loan
quality or borrowers' qualifications; they could immediately sell off
every loan they could make. However, that wasn't enough; they then
started doing (triple-A rated) toxic CDOs designed to fail, selling
them to their customers and then taking out CDS gambling bets that
they would fail (creating enormous demand for bad loans).

AIG was the largest holder of these CDS gambling bets and was
negotiating to pay off at 50-60 cents on the dollar when the sec. of
treasury stepped in and forced them to sign a document that they can't
sue those making the bets, and to take TARP funds to pay off at 100
cents on the dollar. The largest recipient of TARP funds was AIG, and
the largest recipient of face-value payoffs was the firm formerly
headed by the sec. of treasury.

Phishing for Phools, pg39/899-900:
The CDSs played several roles in the financial crisis. AIG's holdings,
as large as they were, were still only 1 percent of the approximately
$57 trillion in the entire market.

... snip ...

--
virtualization experience starting Jan1968, online at home since Mar1970

last decade, a cousin of white house chief of staff Card ... was
dealing with the Iraqis at the UN and was given evidence that WMDs had
been decommissioned; she provided the information to her cousin,
Powell, and some number of others, and then was locked up in a
military hospital .... "EXTREME PREJUDICE -- The Terrifying Story of
the Patriot Act and the Cover Ups of 9/11 and Iraq"
http://www.amazon.com/EXTREME-PREJUDICE-Terrifying-Story-Patriot-ebook/dp/B004HYHBK2/

Enormous upswing in outsourcing to for-profit companies last decade
... quotes about 10% ... split between lobbyists and congress (it is
illegal for gov. agencies to lobby) ... periodic comments that
congress is the most corrupt institution on earth ... a single
contract can be more money than some countries' annual budgets.

Local DC press/news will periodically refer to washington politics as
Kabuki Theater ... what you see publicly has little or nothing to do
with what is really going on (including the facade of conflict between
the parties, which increases the funds raised). past posts
http://www.garlic.com/~lynn/submisc.html#kabuki.theater

charlesm@MCN.ORG (Charles Mills) writes:
Now, in a sense, mainframes ARE getting faster. More cache. Higher
real memory limits and for Z, dramatically lowered memory prices. That
processor multi-threading thing. But especially, new instructions that
are inherently faster than the old way of doing things. Load and store
on condition are the i-cache's dream instructions! Lots and lots of
new "faster way to do things" instructions on the z12 and z13.

cache miss access to memory ... when measured in number of processor
cycles ... is comparable to 60s disk access time when measured in
number of 60s processor cycles. Non-mainframe processors have been
doing memory-latency compensation for decades: out-of-order execution,
branch prediction, speculative execution, hyperthreading, etc (aka
waiting for memory access is increasingly being treated like
multiprogramming in the 60s while waiting for disk i/o). Also,
industry-standard non-risc processors some time ago introduced risc
micro-ops ... where standard instructions are translated into risc
micro-ops for execution scheduling.

mainframe implementations are more & more reusing industry-standard
implementations: fixed-block disks, the fibre-channel standard, CMOS,
etc. Half the per-processor performance improvement from z10->z196 is
claimed to come from playing catchup, introducing some of these
industry-standard memory-access compensation technologies ... with
further additions in z12 (it's not clear about z13 ... some numbers
for total system throughput compared to z12 show an increase that is
less than the increase in the number of processors ... possibly
implying that per-processor throughput didn't increase, or even
declined).

early 70s, I got sucked into helping with a hyperthreading effort for
the 370/195 (which never shipped).

The 370/195 could run at 10MIPS, but most codes ran at 5MIPS. The 195
had out-of-order execution, but didn't have branch prediction or
speculative execution ... so conditional branches stalled the pipeline
(it took careful programming to get 10MIPS; the abundance of
conditional branches in most codes kept the machine to 5MIPS). Two
i-streams, each running at 5MIPS, would be able to keep the machine
running at an aggregate 10MIPS.

The idea was to have dual i-streams and register sets ... but the same
single pipeline and execution units ... with instructions flagged as
to i-stream. This discussion of the end of ACS-360 includes references
to hardware multithreading patents (and red/blue bit tagging):
http://people.cs.clemson.edu/~mark/acs_end.html

other acs-360 reference/trivia ... Amdahl says that executives were
afraid that it would advance the state-of-the-art too fast and the
company would lose control of the market ... so acs-360 was shut
down. There is also a description of acs-360 features that show up
over 20yrs later in ES-9000.

disARMed

David Wade <dave.g4ugm@gmail.com> writes:
I think thats the approach IBM took with the z Series
mainframes. There ae various levels of microcode ansd millicode so
knowing what is actually as basic instruction can be interesting...

there are so few Z series being sold that they are increasingly
leveraging other technologies (with some differentiation layer on top).

The z12 processor was 32nm technology ... I had calculated, based on
1qtr2014 sales numbers and a straight remap of 32nm to 14nm and of
300mm to 450mm wafers, that the processor chips for several years of z
mainframe sales could be handled by a single wafer (a little over a
decade ago I had a chip done and the minimum wafer run was six wafers;
assume that it is still something similar).

First Single-Chip Out-of-Order Microprocessor?

Quadibloc <jsavard@ecn.ab.ca> writes:
Not having a cache on chip, and even having the MMU off-chip doesn't, in my
estimation, disqualify it as a single-chip microprocessor; the CPU is still
just on one chip. So the 88110 wasn't the answer I was looking for, even if
some other chip was even earlier.

Old HASP

As an undergraduate in the 60s, the univ. hired me fulltime to be
responsible for the IBM mainframe systems ... this included going to
SHARE. En route to an east coast SHARE, I took a side-trip from
LaGuardia to Ithaca on what I remember was a DC3 (to see Bill Worley,
to talk about a lot of the mods that I had made to HASP). We were held
in the plane on the tarmac for an hour but still flew through the
middle of a thunderstorm ... and I got severely air sick. I stumbled
off at Elmira, got a rental car and found a motel to stay in ... and
then drove to Ithaca the next day. ... worley/cornell/hasp ref
http://www.worldcat.org/title/cornell-hasp-ie-houston-automatic-spooling-priority-system-for-the-ibm-360/oclc/63226490

In a former life my wife was in the gburg JES group, one of the LASP
"catchers" for jes3, and one of the JESUS (jes unified system) spec
coauthors ... this was mid-70s ... until Cannavino con'ed her into
going to POK to be in charge of loosely-coupled architecture.

In POK, she did "shared data" architecture .... but didn't remain long
there; in part because there was little uptake of the architecture
(except for IMS hot standby) until SYSPLEX & parallel SYSPLEX ... and
because of constant battles with communication group attempting to
force her into using SNA/VTAM for loosely-coupled operation. There
would periodically be a (temporary) truce where the communication
group said she could do anything within datacenter walls, but they
owned everything that crossed the datacenter walls (but almost
immediately they would resume trying to force her into using
SNA/VTAM).
http://www.garlic.com/~lynn/submain.html#shareddata

Washington System Center in Gburg ... WTSC in POK, AFE System Center
in Shady Grove (near gburg). AFE hdqtrs was in Tarrytown. EMEA hdqtrs
had been in Westchester.

SE training used to be a sort of apprentice program, as part of a
large group at a customer installation. With the 23Jun1969 unbundling
announcement (starting to charge for software, SE services, etc), they
couldn't figure out how not to charge for apprentice SEs training
onsite at customer locations.
http://www.garlic.com/~lynn/submain.html#unbundle

The response was HONE (hands-on network environment), a number of CP67
virtual machine datacenters with branch office access, running guest
operating systems. CP67/CMS, the internal network, the invention of
GML (precursor to SGML, HTML), etc, had been done at the Science
Center. The Science Center also ported APL\360 to CMS for CMS\APL.
HONE then also started offering CMS\APL-based sales&marketing support
applications ... which came to dominate all HONE activity (and guest
operating system use dwindled away).
http://www.garlic.com/~lynn/subtopic.html#honeand
http://www.garlic.com/~lynn/subtopic.html#545tech

One of my hobbies was providing enhanced production operating systems
for internal datacenters ... including HONE ... and I would get called
in for things like installing HONE-clones at new locations around the
world. One of the first was EMEA hdqtrs move from NY to Paris ... and
a HONE installation for EMEA hdqtrs went into La Defense.

In the mid-70s, the US HONE datacenters were consolidated in Palo Alto
(trivia: when FACEBOOK moved to silicon valley, it was into a new bldg
next door to the old HONE datacenter). In the late 70s, HONE VM370
(HONE had moved from CP67 to VM370) was enhanced for loosely-coupled
cluster support (the largest single-system in the world, with
load-balancing and fall-over recovery; this support didn't ship to
customers until 30 years later). In the early 80s, US HONE was
replicated in Dallas, and then a 3rd in Boulder, for geographic
survivability in case of whole-datacenter failure.
http://www.garlic.com/~lynn/submain.html#available

One of the people that my wife worked with on the catcher for LASP (to
JES3) recently passed. I had gotten a question from a customer (who
was updating some wiki entries) about early os/vs2 and virtual memory
... and redistributed it to some IBMers from the period. This is an
(archived) post with some of his recollections

--
virtualization experience starting Jan1968, online at home since Mar1970

In the early 80s there was a Washington Post article calling for a
100% unearned-profit tax on the US auto industry ... import quotas
were supposedly to provide the US auto industry with significant
profits (reduced competition allowing significant price increases)
that would be used to remake the industry; however they just pocketed
the profits and continued business as usual.

In 1990, the US auto industry had the C4 taskforce to look at
completely remaking themselves, and because they were planning on
heavily leveraging technology, major technology vendors were invited
to send representatives. some past posts
http://www.garlic.com/~lynn/submisc.html#auto.c4.taskforce

In the meetings they had detailed descriptions of what the foreign
makers were doing and what US makers needed to change. One of the
detailed descriptions was that the US auto industry had been taking
7-8yrs to go from start to rolling off the line (and was typically
running two efforts in parallel, offset by a couple years ... so
something new was coming out every 3-4 years instead of every 7-8
years). Foreign makers had cut the process in half (3-4yrs) and were
in the process of cutting it in half again (18-24 months). They could
respond to changing buying habits or changing technology much more
quickly than US makers. Offline I would chide my IBM "mainframe"
brethren (who were attending): how could they expect to help (since
they had a similar schedule, a 7-8yr product cycle with two efforts
running in parallel offset by a couple years)? Silicon Valley chip
products were on a path towards 18-24 months.

What's Worked in Computer Science

Bengt Larsson <bengtl12.net@telia.NOSPAMcom> writes:
Isn't it simpler to say simply that a RISC architecture is explicitly
designed to be a good target for a compiler, but CISC designs had a
strong component of looking natural to an assembly-language
programmer.

the 801/risc presentations from the 76/77 era were that
compiler/software technology would be leveraged to greatly simplify
the hardware implementation.

801/risc would have a simple hardware implementation; the compiler
would generate correct code and the loader would only load valid
compiled programs ... so it wasn't necessary to have hardware
privileged/non-privileged support; virtual memory could be handled
with just 16 segment registers, because inline code could change
segment-register values (in much the same way address-register values
can be changed, w/o needing kernel calls to validate authorization);
no cache coherency (a loader that operates on instructions would have
instructions to force d-cache lines to memory and invalidate i-cache
lines)

I've periodically claimed that 801/risc was at least partially motivated
to go to the opposite extreme of the (failed) FS effort with its complex
hardware implementation. FS reference
http://www.jfsowa.com/computer/memo125.htm

First Single-Chip Out-of-Order Microprocessor?

Quadibloc <jsavard@ecn.ab.ca> writes:
I have now found a source of more comprehensive information.

It doesn't list out-of-order implementations as such, and thus would exclude
the 88100, but instead lists the various processors which employed register
renaming.

After the POWER1, the next one is from 1992, and is whatever CPU was
used in the ES/9000 mainframe. And next year, IBM is again out in
front, with several single-chip PowerPC implementations using register
renaming.

ES/9000 in 1990 ... reference to the end of ACS-360 ... Amdahl
shutting it down because executives were afraid it would advance the
state-of-the-art too fast and the company would lose control of the
market
http://people.cs.clemson.edu/~mark/acs_end.html

also lists hyperthreading patents and ACS-360 features that show up in
ES/9000 more than 20yrs later ...

--
virtualization experience starting Jan1968, online at home since Mar1970

scott@slp53.sl.home (Scott Lurndal) writes:
Something we, here in the Silicon Valley, are quite aware of :-(.
They finally cleaned up the superfund site on Bernal (Fairchild/Nat
Semi, IIRC); but there is still plenty of groundwater remediation
occuring throughout the valley.

one of my co-workers, who lived south of the fairchild bldg (affected
by the fairchild ground-water contamination) ... came down with cancer
attributed to that contamination.

shortly after, a new grocery store went in (where the fairchild bldg
used to be). I was in the store and noticed that the ATM machine was
opened up and being worked on. I went over and started talking to the
person ... this was after nationsbank had taken over bank of america
and changed its own name to bank of america. He said that nationsbank
was letting lots of the bank of america people go. He said that he was
assigned to go through all the bank of america ATM machines (in the
local area) and replace the advanced, state-of-the-art intrusion
detection with the hack that nationsbank used (so all the ATM machines
would be the same) ... and when he was done, he would be let go too.

tandem had bought one of the ATM machine technology providers
(Atalla), in part because tandem machines were extensively used in the
ATM/cashmachine networks (tandem itself had been bought by compaq)
... old post referencing a workshop that compaq/tandem/atalla
sponsored for us:
http://www.garlic.com/~lynn/aepay3.htm#riskm

Happy Dec-10 Day!!!

mausg writes:
Familys that have fallen in `class' are a fertile ground for revolutionaries,
the `lumpenproletariate' vent their frustrations in family of social group.
China are following their own path, as opposed to Russia, the Chinese anti-
corruption drive at the moment is catching VeryImportantPeople. Contrast to
the UK-US answer, specially in the UK, where it seems to be a mark of status
that one can `walk' even with seeming good evidence of fraud.

Its amazing that many people regard Russia still as a Socialist country,
in an argument split, leftists still regard Russia as `good'.

A customer with 360/65 instantaneous loss of power. I was there only for a
couple hours to drop off some equipment. Later heard they lost a couple
disk packs.

Separate from that, power failures can also precipitate disk drive
failures.

IBM CKD dasd had a power-loss failure mode ... where there wasn't
enough power to maintain memory contents ... but there was enough
power left for the controller to complete a write operation. The
problem was that the channel had stopped transferring data ... so the
controller continued writing all zeros. The result was that after
recovery, a subsequent read would show no errors for the record whose
write operation had ("correctly") completed with all zeros (this was
especially troublesome when something like a VTOC record was being
written)

FBA introduced the rule that a physical record would not be written
unless all the data was available to correctly complete the write.
This philosophy continued for RAID (a write "failure" either completes
correctly or at least results in an error indication on a subsequent
read).

During the 80s, there was lots of work trying to figure out how to
retrofit such a fix to CKD dasd ... or at least provide a way for the
system to recognize an incorrect trailing-zeros write.

--
virtualization experience starting Jan1968, online at home since Mar1970

The original CMS filesystem from the mid-60s almost had a fix for this
... updated filesystem control information was written to new
locations ... and then the MFD was rewritten to point to the new
version of the control information rather than the old version. That
worked for all writes except for the MFD itself. The new CMS "EDF"
filesystem in the 2nd half of the 70s went to a pair of MFDs: there
was the current MFD, and a write would always go to the alternate MFD
... if it completes correctly, the alternate becomes the (new) current
and the (old) current becomes the alternate.

A version number goes at the end of the (EDF) MFD; on
startup/recovery, both MFDs are read and the most recent one is used
... a power-failure trailing-zeros write would always result in that
record appearing older than the other MFD.

In the early 80s, I did a CP kernel filesystem ... including a spool
file system that addressed the problem for CP as well. My motivation
was that I had the HSDT project and needed a VM370 spool file system
that ran much faster, for RSCS driving multiple T1 (& faster) links
(the standard spool file system typically got 5-32kbytes/sec ... I
needed 3mbytes/sec or better throughput). some past posts
http://www.garlic.com/~lynn/subnetwork.html#hsdt

I did the implementation in vs/pascal running in a virtual address
space, but still managed a significant improvement over the standard
implementation done as part of the vm370 kernel in assembler.

At one point, I thought I finally had a path to getting it picked up
by the corporate network backbone, whose nodes were moving to multiple
56kbit links. However, this was about the time the communication group
was putting intense pressure on the corporate network to move to SNA
... technical people started being excluded from the backbone meetings
... so the meetings could focus on the pressure being applied to move
to SNA.

However, by that time I was also doing the throughput enhancements for
the mainframe TCP/IP product (also implemented in vs/pascal). At the
time, the standard product got about 44kbytes/sec using nearly a full
3090 processor. In some tuning tests of the "fixes" at Cray Research,
between a Cray and a 4341 ... we got channel-speed throughput using
only a modest amount of the 4341 processor (possibly a 500-times
improvement in bytes moved per instruction executed). some past posts
http://www.garlic.com/~lynn/subnetwork.html#1044

other trivia ... I had also done a paged-mapped CMS filesystem,
originally for CP67 ... and then later moved to VM370. In the early
80s (on 3380 drives), in a side-by-side comparison on a moderately
i/o-intensive workload ... it would get three times the throughput of
the standard CMS filesystem. some past posts
http://www.garlic.com/~lynn/submain.html#mmap

one of my co-workers got Spinney's number and called him up. Chuck
told him he really needed to talk to Boyd. I then got con'ed into
sponsoring Boyd's briefings at IBM (and got to spend a lot of time
alone with John). past Boyd posts
http://www.garlic.com/~lynn/subboyd.html

In the briefings, John would say that military officers got WW2
training in a rigid, top-down command&control structure ... which
required a bloated officer corps (he would also contrast it with the
WW2 German military). He observed that many of these former US
military officers were then starting to contaminate US corporate
culture with their management style (rigid, top-down command & control
and bloated middle management). At about the same time, articles were
appearing claiming MBAs were starting to destroy US corporate culture
with their myopic attention to quarterly numbers.

trivia: I was blamed for online computer conferencing (precursor to
modern social media) on the internal network (larger than
arpanet/internet from just about the beginning until sometime mid-80s)
in the late 70s and early 80s. There was a lot of discussion then
about the myopic focus on short-term quarterly results.

Note in the wake of ENRON, the rhetoric in Congress was that
Sarbanes-Oxley would prevent future ENRONs and guarantee that
executives (and auditors) did jailtime. However, there was a joke that
it was just a large present to the audit industry (for what happened
to the industry and the major audit houses in the wake of ENRON).
Sarbanes-Oxley required that SEC do something, and possibly because
even GAO didn't believe SEC was doing anything, GAO started doing
reports of public-company fraudulent financial filings (which were
also used to drive larger executive bonuses), even showing that
fraudulent filings increased after Sarbanes-Oxley went into effect
(and nobody did jail time).
http://www.gao.gov/special.pubs/gao-06-1079sp/

Several articles have pointed out that even when the fraudulent
filings have been restated, there is never any recovery of the
inflated bonuses.

trivia: I was asked to participate in a 2004 financial meeting (in
Liechtenstein) of European CEOs and corporate presidents that focused
on effects of Sarbanes-Oxley spilling over into Europe (Liechtenstein
appeared to be hosting these meetings as part of getting off the
Treasury's money laundering black list). posts mentioning money
laundering
http://www.garlic.com/~lynn/submisc.html#money.laundering

pg464/loc9995-10000:
IBM was not the born-again growth machine trumpeted by the mob of Wall
Street momo traders. It was actually a stock buyback contraption on
steroids. During the five years ending in fiscal 2011, the company
spent a staggering $67 billion repurchasing its own shares, a figure
that was equal to 100 percent of its net income.

pg465/10014-17:
Total shareholder distributions, including dividends, amounted to $82
billion, or 122 percent, of net income over this five-year
period. Likewise, during the last five years IBM spent less on capital
investment than its depreciation and amortization charges, and also
shrank its constant dollar spending for research and development by
nearly 2 percent annually.

There is enormous pressure put on the victim companies to service the
extreme debt load (cutting corners and generating revenue every way
possible); a side-effect is that over half of corporate defaults
involve private-equity victim companies. An especially attractive
target has been companies doing business with the US government
(intensive lobbying last decade has accelerated the rise of government
outsourcing). Articles reference that private-equity victims include
those responsible for doing government security clearances (which were
just filling out the paperwork and not actually doing the background
checks).

"Computer Wars: The Post-IBM World" Ferguson & Morris on failure of
IBM's Future System:
... and perhaps most damaging, the old culture under Watson Snr and Jr
of free and vigorous debate was replaced with sycophancy and make no
waves under Opel and Akers. It's claimed that thereafter, IBM lived in
the shadow of defeat

... and:
But because of the heavy investment of face by the top management, F/S
took years to kill, although its wrongheadedness was obvious from the
very outset. "For the first time, during F/S, outspoken criticism
became politically dangerous," recalls a former top executive.

The former AMEX president then left IBM to head up a major
private-equity company that did an LBO of the company that was
Snowden's employer (70% of the intelligence budget, and over half the
people, are at for-profit companies)

book "1984"--modern privacy

hancock4 writes:
IMHO, the big risk to privacy are the hidden and not so hidden
databases that may contain derogatory information about us,
be it accurate or not. For instance, the newspaper reported
that doctors have a database about difficult patients, and
landlords have a database about difficult tenants.

Journalists push for these kinds of things--lack of privacy--
because it makes their job easier, indeed, helps create
opportunities. If a Senator's son misbehaves at college,
now they can find out about it and report on it, something
that might have been buried years ago.

he glosses over a number of details like opt-in/opt-out ... I've
posted before about being somewhat involved in the cal. data breach
notification and cal. opt-in legislation ... having been brought in to
help wordsmith the cal. electronic signature legislation ... while
they were also working on the other two pieces. The data-breach
notification legislation passed, although since then there have been a
number of federal bills (none yet passed), about evenly divided
between those similar to cal's and those that would effectively
eliminate the notification requirement.

Before the opt-in bill passed, a (federal preemption) opt-out addenda
was added to GLBA (now better known for the repeal of Glass-Steagall).
Opt-in requires explicit authorization before an organization can
share your personal information; under opt-out, an organization can
share your personal information unless it has a record of your
objecting (opting out).

I've posted before about being at the 2004 annual national privacy
conference in Wash DC; in a panel session with all the FTC
commissioners, somebody in the room got up and asked them if they were
going to do anything about GLBA "opt-out" ... he said he was with a
call-center technology company used by all the major financial
operations ... and all of their 1-800 "opt-out" services provided no
provision for keeping a record of an "opt-out" call (and with no
"opt-out" record, they are free to share your personal information).

Ed was responsible for the internal network, which was larger than the
ARPANET/INTERNET from just about the beginning until sometime in the
mid-80s. I attribute much of this to Ed having included a form of
gateway and distributed control in every node from the beginning. The
ARPANET/INTERNET didn't get this until the great 1JAN1983 cutover to
internetworking protocol, which replaced the tightly controlled,
centrally administered IMPs. At the 1JAN1983 cutover there were
approx. 100 IMP network nodes (with approx. 255 connected hosts); at
the same time the internal network was rapidly approaching 1000
nodes. Part of the IMP folklore from the early 80s was that the 100
tightly controlled IMPs would periodically totally saturate all the
network links with administrative protocol chatter anytime there was
any significant event (part of the justification for the cutover to
internetworking protocol was that the existing IMP technology couldn't
scale).

slightly related trivia: we were working with the director of NSF on
interconnecting the NSF supercomputer centers and were supposed to get
$20M ... related to my HSDT project ... some old email
http://www.garlic.com/~lynn/lhwemail.html#hsdt

Then congress cut the budget and some number of other things happened,
and finally they released an RFP (largely based on stuff HSDT already
had running). Internal politics prevented us from bidding. The
director of NSF tried to help, writing the company a letter, copying
the CEO ... but that just made the internal politics worse. As
regional networks connected into these centers, it morphed into the
NSFNET backbone, precursor to the modern internet. some old email
http://www.garlic.com/~lynn/lhwemail.html#nsfnet
related article
http://www.technologyreview.com/featuredstory/401444/grid-computing/

in the "old" days ... there used to be a rash of questions every
Sept. from freshmen just exposed to Usenet &/or the internet for the
first time ... looking to get homework done for free. Then with the
rise of AOL ... the homework questions started coming from younger
users and year round.

After transferring to SJR, I got con'ed into playing disk engineer
across the street. The disk engineering lab was running prescheduled,
stand-alone mainframe testing around the clock, 7x24. They had tried
running MVS for concurrent testing ... but found MVS had a 15min MTBF
in that environment. I offered to rewrite IOS to make it bullet proof
and never fail, enabling any amount of concurrent on-demand testing
... greatly increasing productivity. I produced a research report
about the enhancements and happened to mention the 15min MVS MTBF.
Later I got a call from somebody in the POK MVS group; I thought at
first that they might want to know about all the enhancements, but
they just wanted to know who my manager was. Later I found out that
they were more interested in getting me separated from the
corporation. When they couldn't, I was told that they would do their
best to make my career unpleasant (including no corporate awards).

Old email reference from a little later ... for 3380, FE had a
regression test of 57 typical errors to be expected; MVS was still
failing in all cases, requiring re-IPL, and in 2/3rds of the cases
there was no indication of what caused the failure
http://www.garlic.com/~lynn/2007.html#email801015

we spent a lot of time looking at various failure scenarios. One of
the buzz words was "telco provisioning". There was a datacenter in
Manhattan that was carefully chosen to be in a building that had
multiple power feeds from different substations on different sides of
the building, multiple telco feeds from different exchanges on
different sides of the building, and multiple water main feeds on
different sides of the building. It was taken out when a transformer
in the building exploded, contaminating the building with PCBs, and
the building had to be evacuated.

As part of HA/CMP, we also worked on geographically separated
operations and when I was out marketing HA/CMP, I coined the terms
disaster survivability and geographic survivability (to
differentiate from disaster/recovery). I then got asked to write a
section for the corporate continuous availability strategy
document. It got removed when both Rochester (as/400) and POK
(mainframe) complained that they couldn't meet the requirements.
http://www.garlic.com/~lynn/submain.html#available

--
virtualization experience starting Jan1968, online at home since Mar1970

One of the baby bells did a PU4&PU5 emulation on Series/1, running on
top of distributed control and real networking underneath, with
enormously more function than standard SNA. The communication group
was famous for corporate dirty tricks ... so the largest customer of
32x5 boxes was approached to completely fund the internal costs (they
showed that they would totally recover all costs within nine months
moving over to such a type-1 product from real 32x5s). What the
communication group did next can only be described as truth is
stranger than fiction. Part of a presentation that I gave at the
fall 1986 SNA architecture review board meeting in Raleigh.
http://www.garlic.com/~lynn/99.html#67

Other trivia ... very early in the days of SNA architecture, my wife
and another person (who later went on to be an SRI instructor)
authored AWP39, a peer-to-peer networking architecture (they had to
use the "peer-to-peer" qualifier to indicate real networking ... since
the communication group had co-opted the plain "network" term for SNA,
which is a communication protocol and doesn't have a network layer).

Much later the author of AWP164 (which is released as APPN) and I are
reporting to the same executive ... I periodically needle him to stop
playing with the communication group and work on real networking. When
APPN is getting ready for announce, the communication group
non-concurs and there is a couple-month escalation period
... eventually resulting in the APPN announcement letter being
carefully written to avoid implying that there is any relationship
between APPN and SNA.

--
virtualization experience starting Jan1968, online at home since Mar1970

when I was an undergraduate, I was brought into Boeing as a fulltime
employee (to help with creating BCS, computing moved into an
independent business unit to better monetize the investment) ... and
at the time the Renton datacenter had upwards of $300M in IBM
equipment (in '60s dollars; that summer there was a constant flow of
360/65s staged in the Renton hallways awaiting installation), which I
thought might be the largest in the world. They were looking at
duplicating it in Everett because there was a disaster scenario where
Mt. Rainier heats up and a massive mud slide takes out the Renton
datacenter (some claim that being w/o the Renton datacenter for a week
would cost the company more than the cost of the datacenter). That
summer 747-3 was flying the skies of Seattle getting FAA
certification.

Later I'm sponsoring John Boyd's briefings at IBM ... and it turns out
that he commanded "spook base" about the same time I was at Boeing
... which is claimed to have been a $2.5B "windfall" for IBM (again in
'60s dollars; supposedly it had the largest air conditioned building
in that part of the world). posts mentioning Boyd
http://www.garlic.com/~lynn/subboyd.html

Note that many of the chips supporting older instruction sets now have
a hardware layer translating instructions into RISC micro-ops for
scheduling and execution. It adds slight latency for any specific
instruction but greatly increases the execution throughput of the chip
(so there is now little throughput difference between such chips and
"real" RISC chips). The big throughput issues are things like
super-scalar, out-of-order execution, speculative execution, branch
prediction, hyperthreading, etc ... that help mask cache miss and
memory access latency. Current memory access latency ... when measured
in count of processor cycles ... is comparable to 1960s disk access
time when measured in count of 1960s processor cycles.

--
virtualization experience starting Jan1968, online at home since Mar1970

Rhetoric in congress was that Sarbanes-Oxley was going to prevent
future ENRONs and guarantee that executives (and auditors) did
jailtime, but it required the SEC to do something. However, at the
time there were jokes that it was actually just a huge gift to the
audit industry. Possibly because even the GAO didn't believe the SEC
was doing anything, the GAO started doing reports of public company
fraudulent financial filings, even showing that they increased after
Sarbanes-Oxley went into effect (and nobody doing jailtime).
posts mentioning Sarbanes-Oxley
http://www.garlic.com/~lynn/submisc.html#sarbanes-oxley

from above:
The database consists of two files: (1) a file that lists 1,390
restatement announcements that we identified as having been made
because of financial reporting fraud and/or accounting errors between
July 1, 2002, and September 30, 2005, and (2) a file that lists 396
restatement announcements that we identified as having been made
because of financial reporting fraud and/or accounting errors between
October 1, 2005, and June 30, 2006.

... snip ...

various reports were that part of the motivation for the fraudulent
financial reporting was increasing the bonuses of executives
... however, even after the financial restatements, the inflated
bonuses weren't clawed back.

The "joke" about the gift to the audit industry was that
Sarbanes-Oxley required corporations to significantly increase
spending on audits ... but with apparently no intention of seeing
executives and auditors in jail ... aka a congressional Kabuki Theater
moment (what you see publicly has little to do with what really goes
on).
http://www.garlic.com/~lynn/submisc.html#kabuki.theater

--
virtualization experience starting Jan1968, online at home since Mar1970

Mid-80s, corporate was projecting that sales would shortly double
(from something like $60B to $120B) ... mostly based on the mainframe
business ... there was a massive internal building program to double
mainframe manufacturing capacity and a huge influx of "fast track"
MBAs being rapidly rotated thru mid-level positions (to the detriment
of many business units) ... presumably trying to double the executive
ranks. This was at the time that the mainframe business was starting
to go in the opposite direction ... and in a few years the company
goes into the red.

In high school, I worked some number of construction jobs, and the
summer after freshman year in college, I was foreman on a job with
three 9-person crews. Shortly after joining IBM in 1970, I was asked
to be a manager ... I asked to read the manager's manual over the
weekend and came back on Monday saying I didn't think my background of
doing attitude adjustment in the parking lot after work would be
looked on favorably by IBM.

Late 80s, a senior disk engineer got a talk scheduled at the
world-wide, annual communication group conference, supposedly on 3174
performance, and opened his talk with the statement that the
communication group was going to be responsible for the demise of the
disk division. The issue was that the communication group had a
stranglehold on datacenters (trying to protect its dumb terminal
paradigm install base) with its corporate strategic ownership of
everything that crossed datacenter walls. The disk division was seeing
data fleeing datacenters to more distributed-computing-friendly
platforms, with a drop in disk sales. The disk division had come up
with a number of solutions to correct the problem, but they were
constantly being vetoed by the communication group.

Ed was responsible for the internal network, which was larger than the
ARPANET/INTERNET from just about the beginning until sometime in the
mid-80s. I attribute much of this to Ed having included a form of
gateway and distributed control in every node from the beginning. The
ARPANET/INTERNET didn't get this until the great 1JAN1983 cutover to
internetworking protocol, which replaced the tightly controlled,
centrally administrated IMPs. At the 1JAN1983 cutover there were
approx. 100 IMP network nodes (with approx. 255 connected hosts); at
the same time the internal network was rapidly approaching 1000
nodes. Part of the IMP folklore from the early 80s was that the 100
tightly controlled IMPs would periodically totally saturate all the
network links with administrative protocol chatter anytime there was
any significant event (part of the justification for the cutover to
internetworking protocol was that the existing IMP technology couldn't
scale). posts mentioning internal network
http://www.garlic.com/~lynn/subnetwork.html#internalnet

slightly related trivia: we were working with the director of NSF on
interconnecting the NSF supercomputer centers and were supposed to get
$20M ... related to my HSDT project ... some old email
http://www.garlic.com/~lynn/lhwemail.html#hsdt

Then congress cuts the budget, a number of other things happen, and
finally they release an RFP (largely based on stuff HSDT already had
running). Internal politics prevents us from bidding. The director of
NSF tries to help, writing the company a letter, copying the CEO
... but that just makes the internal politics worse. As regional
networks connect into these centers, it morphs into the NSFNET
backbone, precursor to the modern internet. some old email
http://www.garlic.com/~lynn/lhwemail.html#nsfnet
related article
http://www.technologyreview.com/featuredstory/401444/grid-computing/

The original mainframe TCP/IP product was implemented in VS/Pascal
... but had some throughput issues, getting 44kbytes/sec throughput
using a full 3090 processor. I did the enhancements to support RFC1044
and in some tuning tests at Cray Research between a Cray and a 4341,
got sustained 4341 channel throughput using only a modest amount of
4341 processor (possibly a 500 times improvement in bytes moved per
instruction executed) ... some past posts
http://www.garlic.com/~lynn/subnetwork.html#1044

The communication group had tried all sorts of corporate dirty tricks
to block the activities. The internal network was not-SNA ... but in
the late 80s ... about the same time I was doing RFC1044 support, the
communication group was applying intense corporate political pressure
to convert the internal network to SNA.

The corporate-sponsored university BITNET used the same technology as
the internal network ... and then converted to TCP/IP. It would have
been much more cost effective and efficient if the internal network
had converted to TCP/IP as well ... instead of SNA. some past posts
http://www.garlic.com/~lynn/subnetwork.html#bitnet

DOS descendant still lives (was Re: slight reprieve on the z)

jcewing@ACM.ORG (Joel C. Ewing) writes:
No (about the "free", not about the "dead for decades"), DOS/VS was the
last really free base (last version Release 34?). Perhaps technically
DOS/VSE was "free", as there didn't appear to be a monthly licensing
charge for DOS/VSE itself (Computerworld, April 30, 1979, p4), but in
the practical sense a production DOS/VSE system was definitely not free
as there were monthly support charges for DOS/VSE and separate monthly
licensing plus support charges for must-have VSE add-on components like
VSE/Power and others. DOS/VSE came out with the IBM 4331 & 4341
processors in 1979 and supported running in both S/370 mode or the
ECPS:VSE mode supported by the 4300 processor family.

various legal actions resulted in the 23June1969 unbundling
announcement, where (application) software & other stuff started to be
charged for (however, they made the case that kernel software should
still be free).
some past posts
http://www.garlic.com/~lynn/submain.html#unbundle

Endicott did something similar for E-architecture (4331 & 4341),
tailored for VS1 & DOS. In large part, a single virtual address space
was supported as part of the hardware architecture. Rather than having
segment & page tables ... there were two new instructions: one told
the machine what virtual address was at what real address, and the
other invalidated the virtual address.
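
The two-instruction scheme described above can be put in a toy sketch (a model of the described behavior only; the operation names and the 2kbyte page size are invented here, not the actual E-architecture definitions):

```python
# Toy model of the E-architecture idea described above: no segment/page
# tables -- the machine is told directly which virtual page sits at which
# real page, with a second operation to invalidate the mapping.
mapping = {}                     # virtual page -> real page

def set_translation(virtual_page, real_page):
    mapping[virtual_page] = real_page

def invalidate_translation(virtual_page):
    mapping.pop(virtual_page, None)

def translate(addr, page_size=2048):
    page, offset = divmod(addr, page_size)
    if page not in mapping:
        raise LookupError("page fault")   # untranslated virtual page
    return mapping[page] * page_size + offset

set_translation(5, 42)
assert translate(5 * 2048 + 100) == 42 * 2048 + 100
invalidate_translation(5)
```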

However, there was an enormous explosion in vm/4300 sales (before
announce, 4341s were referred to as "E4") ... which required multiple
virtual address spaces ... which meant that a large number of 4300s
ran in 370 mode rather than E-mode. Note that POK had convinced
corporate to kill off the vm370 product and move all the development
people to POK as part of MVS/XA development (including the excuse that
MVS/XA wouldn't ship on time if they couldn't get the additional
resources). Endicott managed to save the vm370 product mission, but
had to reconstitute a development group from scratch. some old 4300
related email
http://www.garlic.com/~lynn/lhwemail.html#43xx

Note that the VS1 and VM/370 "ECPS" was different from the E-machine
architecture. It originated with the 138/148 (virgil/tully) ... where
selected high-use kernel/system instruction paths were implemented in
microcode. The low- & mid-range machines were vertical microcode
machines with an avg of 10 native instructions per 370 instruction
(somewhat analogous to mainframe emulators that run on Intel
platforms). Kernel instruction paths tended to get a 10:1 performance
improvement when moved to microcode. I did the initial study and
effort for the VM/370 ECPS ... old post with results for selecting
pathlengths to be moved to microcode (I was told that I needed to
select the 6kbytes of highest-executed kernel instructions, which
turned out to account for 80% of vm/370 kernel execution)
http://www.garlic.com/~lynn/94.html#21 370 ECPS VM microcode assist
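
The selection exercise can be illustrated with a small greedy sketch (the path names and numbers below are invented for illustration; the actual study worked from measured vm/370 kernel profiles):

```python
# Greedy sketch: given profiled kernel code paths, pick the most-executed
# ones until the (roughly 6kbyte) microcode budget is exhausted.
def select_for_microcode(paths, budget_bytes=6 * 1024):
    """paths: list of (name, size_bytes, fraction of kernel execution)."""
    chosen, used, covered = [], 0, 0.0
    for name, size, share in sorted(paths, key=lambda p: p[2], reverse=True):
        if used + size <= budget_bytes:
            chosen.append(name)
            used += size
            covered += share
    return chosen, used, covered

# invented sample profile
sample = [("dispatch", 1200, 0.22), ("page-fault", 1500, 0.18),
          ("ccw-translate", 2000, 0.25), ("free-storage", 900, 0.10),
          ("spool-io", 3000, 0.05)]
chosen, used, covered = select_for_microcode(sample)
print(chosen, used, f"{covered:.0%} of kernel execution")
```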

trivia ... the methodology for selecting the VS1 paths wasn't nearly so
rigorous.

In any case, the mad rush to get stuff back into the 370 product
pipeline contributed to the decision to pick up various of my stuff
and ship it in products for customers. One part of that stuff (the
dynamic adaptive resource manager) was selected to be the guinea pig
for starting to charge for system/kernel software ... and I had to
spend some amount of time with lawyers & business types going over
policies for system/kernel software charging.

even more trivia: when 3033 looked at doing something similar to ECPS
... it didn't work out as well. The 3033 was a horizontal microcode
machine that had been optimized so it was executing nearly one 370
instruction per machine cycle. Directly dropping system/kernel 370
pathlengths into microcode could even result in running slower than
the original 370.

--
virtualization experience starting Jan1968, online at home since Mar1970

As an undergraduate I do lots of work on cp67 (including getting it to
run on a 256kbyte machine). The morph of cp67 to vm370 did a lot of
simplification of cp67 while at the same time bloating the kernel
size, so performance was seriously impacted when running in
256kbytes. Boeblingen does the 115 & 125 ... and at one point I get
dragged into optimizing vm370 to run on 256kbyte 125 customer
machines.

Boeblingen does the 135 and 138. Then Boeblingen does the
4331. Endicott had con'ed me into doing lots of work on the 148 vm/370
ECPS. While I was helping Endicott with the 148 vm370 ECPS, the 125
group also asked me to do the design and specification for a 5-way 125
SMP machine (which never shipped; it turns out the Endicott 148 people
felt threatened that a 5-way 125 multiprocessor would impact their
market ... which put me in an odd position since I was doing both).

the product test group in bldg 15 would typically get the 3rd or 4th
engineering model for doing disk i/o testing. they got the 3rd
engineering model of 3033 and a very early E4 (4341) engineering
machine for testing. Because I was doing so much stuff for them, I
would get lots of time on bldg. 15 systems for other stuff I might
want to do. The performance test marketing group in Endicott con'ed me
into doing customer benchmarking on the bldg. 15 engineering E4/4341
... since I had better access to the machine than they had to early
engineering machines in Endicott. some old email
http://www.garlic.com/~lynn/lhwemail.html#43xx

the email includes references that when the E4/4341 originally arrived
in bldg. 15 ... it had the processor cycle slowed down (allowing the
machine to work as they refined the engineering), so the benchmarks
were not as good as they could be. Later, as they refined the machine,
they were able to crank down the processor cycle time.

One of the benchmarks was for LLNL, which was looking at getting 70
4341s for a compute farm (sort of the precursor to modern cluster,
GRID and supercomputing). It showed the 4341 was faster than the 158 &
3031, and a 4341 cluster was faster, cheaper, and much less floor
space and environmentals than a 3033. The cluster 4341 threat to 3033
was so big that at one point the head of POK got corporate to cut the
allocation of a critical 4341 manufacturing component in half.

other trivia: circa 1980, there was a plan to move the large variety
of internal microprocessors to 801/RISC, including the low & mid-range
(vertical microcode) 370s, what was to be as/400, lots of controllers,
etc. For various reasons that effort floundered and they continued
with various CISC microprocessors. I helped somebody in Endicott with
a white paper showing that VLSI was moving to the point that a large
part of 370 could be implemented directly in silicon ... as opposed to
having to be emulated ... which would be much faster & better
price/performance than pure emulation in 801/RISC (another side effect
was that some number of 801/RISC engineers left to work on RISC
projects at other vendors).

Boeblingen does the 4361 (4331 follow-on) and Endicott does the 4381
(4341 follow-on) in CISC. IBM was expecting that the 4361/4381 would
continue the enormous 4331/4341 sales explosion, but by that time the
mid-range market was starting to move to workstations and large PCs.

Previous posts mention that in the wake of the FS failure, there was a
mad rush to get stuff back into the 370 product pipeline; this
included the 3033 (168 logic remapped to faster chips) and 3081/xa,
kicked off at the same time
http://www.jfsowa.com/computer/memo125.htm

Turns out during the 3033 engineering period, I was also involved in a
16-way 370 SMP effort and we con'ed some of the 3033 processor
engineers into working on it in their spare time. At first everybody
thought it was a really great effort, and then somebody tells the head
of POK that it could be decades before MVS had effective 16-way
support ... and he then invites some of us to never visit POK again
... and tells the 3033 processor engineers to stop being distracted by
other activities. some SMP posts
posts
http://www.garlic.com/~lynn/subtopic.html#smp

With the 3033 out the door, the 3033 processor engineers start work on
trout1.5 (aka 3090, in parallel with the ongoing 3081/xa) circa
1980. Part of the 3090 effort was to use a 4331 as service processor,
running a highly modified version of vm370 release 6 ... and I
periodically get dragged into that effort. The 3090 service processor
eventually gets upgraded to a pair of 4361s running a highly modified
version of vm370 release 6 ... a couple (later) old email references
http://www.garlic.com/~lynn/2010e.html#email861031
http://www.garlic.com/~lynn/2010e.html#email861223

In the early days of REXX (well before it shipped to customers), I
wanted to demonstrate that it wasn't just another pretty scripting
language ... and chose to rewrite IPCS all in REXX as a
demonstration. At the time, IPCS was an enormous amount of assembler
code. I set out to rewrite it with ten times the function and ten
times the performance, working half time over three months. I finished
early and so started doing a library of automated analysis that would
look for typical failure mode signatures.

I had expected that when REXX shipped to customers, my rewritten IPCS
would also ship (it was by then in use by nearly all internal
datacenters and customer support PSRs). I did get permission to make
presentations on my rewrite of IPCS at customer user group meetings
(and within a couple weeks, similar implementations started to appear
at customer shops). In any case, the above 3092 (aka 4361) email
references were about picking up my IPCS rewrite and shipping it as
part of the standard 3092. Some past posts mentioning the IPCS rewrite
http://www.garlic.com/~lynn/submain.html#dumprx

--
virtualization experience starting Jan1968, online at home since Mar1970

in the wake of FS and the mad rush ... 303x was kicked off ... as
mentioned, 3033 was 168 logic remapped to 20% faster chips ... that
happened to have ten times more circuits per chip. Using the original
168 logic, 3033 would have been only 20% faster than 168 (aka
3.6mips). However, some specific logic rework to use the larger number
of circuits per chip got it up to 50% faster than 168 (4.5mips).

158 manufacturing had been enormously automated ... somewhat like what
they quote for the incremental cost of an automobile rolling off the
line. The 158 integrated channel microcode was used for the 303x
channel director (a 158 engine w/o the 370 microcode and with just the
integrated channel microcode). A 3031 was two 158 engines, one with
just the 370 microcode and a 2nd (channel director) with just the
integrated channel microcode. A 3032 was a 168-3 reworked to use the
303x channel director for external channels.

also had run in 35.77secs on CDC6600. The 158 370 was slower than the
3031 because the (single) 158 engine was being shared between the 370
microcode and the integrated channel microcode (which ran even when
the channels were idle).

... I replaced the static table of supported 370 models with dynamic
code to determine the machine characteristics ... it made it much
simpler to deploy a csc/vm production system to different machines
(like engineering models) not included in the shipped static table of
supported machines. some posts discussing work at the scientific
center (why it was called csc/vm). some past scientific center posts
http://www.garlic.com/~lynn/subtopic.html#545tech
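
The pattern is: probe at startup rather than look up in a shipped table. As a loose analogy only (an assumed sketch, not the actual csc/vm code, which read out machine characteristics rather than timing a loop):

```python
# Probe the machine at startup instead of consulting a static model table;
# this works on machines that didn't exist when the table was shipped.
import time

def probe_loop_rate(iterations=2_000_000):
    """Measure how fast this machine runs a fixed loop."""
    start = time.perf_counter()
    total = 0
    for i in range(iterations):
        total += i          # fixed, known amount of work per iteration
    elapsed = time.perf_counter() - start
    return iterations / elapsed

rate = probe_loop_rate()    # iterations/second on whatever machine this is
```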

Later I transfer to San Jose Research ... on the San Jose plant site
(across the street from bldgs. 14&15), and csc/vm morphs into sjr/vm.
Old 4341 email about engineering model processor cycle time includes a
reference to checking the DSPSL value ... which is one of my
dynamically determined values ... old reference

from my dynamic adaptive resource manager ... which was the guinea pig
for starting to charge for system/kernel software (customers referred
to it as "fair share" since the default resource management policy was
"fair share") ... some past posts
http://www.garlic.com/~lynn/subtopic.html#fairshare

We were working with NSF director to interconnect the NSF supercomputer
labs and were supposed to get $20M. Then congress cuts the budget, a
number of other things happen, and finally NSF releases an RFP (based
largely on what we already have running). Internal politics prevent us
from bidding. The NSF director tries to help by writing the company a
letter copying the CEO, but that just makes the internal politics
worse. As regional networks connect into the sites, it morphs into the
NSFNET backbone (precursor to the modern internet). some old email
http://www.garlic.com/~lynn/lhwemail.html#nsfnet

During part of this, the communication group was spreading internal
misinformation about how they might be able to play. Somebody collects
a lot of their email and forwards it to us ... small part, heavily
snipped and redacted to protect the guilty:
http://www.garlic.com/~lynn/2006w.html#email870109

--
virtualization experience starting Jan1968, online at home since Mar1970

We were working with NSF director to interconnect the NSF supercomputer
labs and were supposed to get $20M. Then congress cuts the budget, a
number of other things happen, and finally NSF releases an RFP (based
largely on what we already have running). Internal politics prevent us
from bidding. The NSF director tries to help by writing the company a
letter copying the CEO, but that just makes the internal politics
worse. As regional networks connect into the sites, it morphs into the
NSFNET backbone (precursor to the modern internet). some old email
http://www.garlic.com/~lynn/lhwemail.html#nsfnet
and
http://www.technologyreview.com/featuredstory/401444/grid-computing/

During part of this, the communication group was spreading internal
misinformation about how they might be able to play. Somebody collects
a lot of their email and forwards it to us ... small part, heavily
snipped and redacted to protect the guilty:
http://www.garlic.com/~lynn/2006w.html#email870109

Ed was responsible for the internal network, which was larger than the
ARPANET/INTERNET from just about the beginning until sometime in the
mid-80s. I attribute much of this to Ed having included a form of
gateway and distributed control in every node from the beginning. The
ARPANET/INTERNET didn't get this until the great 1JAN1983 cutover to
internetworking protocol, which replaced the tightly controlled,
centrally administrated IMPs. At the 1JAN1983 cutover there were
approx. 100 IMP network nodes (with approx. 255 connected hosts); at
the same time the internal network was rapidly approaching 1000
nodes. Part of the IMP folklore from the early 80s was that the 100
tightly controlled IMPs would periodically totally saturate all the
network links with administrative protocol chatter anytime there was
any significant event (part of the justification for the cutover to
internetworking protocol was that the existing IMP technology couldn't
scale).

slightly related trivia: we were working with the director of NSF on
interconnecting the NSF supercomputer centers and were supposed to get
$20M ... related to my HSDT project ... some old email
http://www.garlic.com/~lynn/lhwemail.html#hsdt

The original mainframe TCP/IP product was implemented in VS/Pascal
... but had some throughput issues, getting 44kbytes/sec throughput
using a full 3090 processor. I did the enhancements to support RFC1044
and in some tuning tests at Cray Research between a Cray and a 4341,
got sustained 4341 channel throughput using only a modest amount of
4341 processor (possibly a 500 times improvement in bytes moved per
instruction executed) ... some past posts
http://www.garlic.com/~lynn/subnetwork.html#1044

The communication group had tried all sorts of corporate dirty tricks
to block the activities. The internal network was not-SNA ... but in
the late 80s ... about the same time I was doing RFC1044 support, the
communication group was applying intense corporate political pressure
to convert the internal network to SNA. In this period, CJN meetings
were restricted to managers only (afraid that technical people might
confuse the move to SNA with facts). Some old email:
http://www.garlic.com/~lynn/2006x.html#email870302
http://www.garlic.com/~lynn/2011.html#email870306

The corporate-sponsored university BITNET used the same technology as
the internal network ... and then converted to TCP/IP. It would have
been much more cost effective and efficient if the internal network
had converted to TCP/IP as well ... instead of SNA. some past posts
http://www.garlic.com/~lynn/subnetwork.html#bitnet

The PC/RT had done its own 4mbit T/R card. For the RS/6000
(w/microchannel), they were told that they had to use PS2 microchannel
cards. The design point for the PS2 16mbit T/R card was dumb terminal
emulation, with 300+ stations sharing the same network (very low
throughput per card). As a result, a PC/RT server with the 4mbit T/R
card had higher throughput than an RS/6000 with the 16mbit T/R
card. posts mentioning 801/risc,
romp, rios, pc/rt, rs/6000, power, power/pc, etc.
http://www.garlic.com/~lynn/subtopic.html#801

The new Almaden bldg. was heavily wired with CAT5, assuming it would
be used for 16mbit T/R ... however, they found that 10mbit ethernet
had higher per-card throughput, and the ethernet network had higher
aggregate throughput and lower latency.

There was an IBM report released in this time-frame showing poor
ethernet throughput ... for which I can only assume they used the
original 3mbit prototype ethernet (before listen-before-transmit). An
ACM SIGCOMM paper about the same time showed a typical 10mbit ethernet
configuration getting 8mbit/sec effective throughput even when all the
low-level driver software was put in a loop continuously transmitting
minimum-sized packets.

In 1988, I was asked to help LLNL standardize some serial stuff they
had, which quickly becomes the fibre-channel standard. Later some POK
channel engineers become involved and define a heavy-weight protocol
for fibre-channel that drastically cuts the native throughput
... eventually released as FICON.

IBM has published a z196 peak I/O benchmark that uses 104 FICON
channels to get 2M IOPS. About the same time, a fibre-channel was
announced for the E5-2600 blade claiming over 1M IOPS (i.e. two such
fibre-channels have higher throughput than the 104 FICON ... running
over 104 fibre-channels). I have yet to see comparable reports done
for EC12 and z13. some posts
http://www.garlic.com/~lynn/submisc.html#ficon
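
The per-channel arithmetic behind that comparison, using only the figures quoted above:

```python
# IOPS per channel implied by the published figures.
z196_iops = 2_000_000        # z196 peak I/O benchmark, over 104 FICON
ficon_channels = 104
native_fc_iops = 1_000_000   # claimed for one fibre-channel on the E5-2600 blade

per_ficon = z196_iops / ficon_channels       # ~19,231 IOPS per FICON
ratio = native_fc_iops / per_ficon           # native FC ~52x per channel
print(f"{per_ficon:,.0f} IOPS/FICON; native fibre-channel ~{ratio:.0f}x")
```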

--
virtualization experience starting Jan1968, online at home since Mar1970

25 Years: How the Web began

hancock4 writes:
I remember Boardwatch magazine (and its opinionated editor). I
think he was based in Columbine which later gained notoriety.

pagesat did satellite broadcast of usenet ... I got a pagesat receiver
for free in return for doing MS/DOS and a couple Unix drivers for the
pagesat receiver ... and co-authoring a (June '93) Boardwatch article.

"How long does a store halfword take?" used to be a question that had an
answer. It no longer does.

My working rule of thumb (admittedly grossly oversimplified) is
"instructions take no time, storage references take forever." I have heard
it said that storage is the new DASD. This is so true that the z13
processors implement a kind of "internal multiprogramming" so that one CPU
internal thread can do something useful while another thread is waiting for
a storage reference.

Here is an example of how complex it is. I am responsible for an "event" or
transaction driven program. I of course have test programs that will run
events through the subject software. How many microseconds does each event
consume? One surprising factor is how fast you push the events
through. If I max out the speed of event generation (as opposed to,
say, one event per tenth of a second) then on a real-world shared Z
the microseconds of CPU per event falls in HALF! Same exact sequence
of instructions -- half the CPU time! Why? My presumption is that if
the program is running flat out it "owns" the caches and there is much
less processor "wait" (for
instruction and data fetch, not ECB type wait) time.

so such accounting measuring CPU time (elapsed instruction time) is
analogous to early accounting, which measured elapsed wall clock time.

cache miss/memory access latency ... when measured in count of
processor cycles ... is comparable to 60s disk access when measured in
count of 60s processor cycles.

There is a lot of analogy between page thrashing when overcommitting
real memory and cache misses. This is an old account of the motivation
behind moving 370 to all virtual memory. The issue was that as
processors got faster, they spent more and more time waiting for
disk. Keeping the processors busy required increasing levels of
multiprogramming to overlap execution with waiting on disk. At the
time, MVT storage allocation was so bad that region sizes needed to be
four times larger than actually used. As a result, a typical 1mbyte
370/165 would only have four regions. Going to virtual memory, it
would be possible to run 16 regions on a typical 1mbyte 370/165 with
little or no paging ... significantly increasing aggregate throughput.
http://www.garlic.com/~lynn/2011d.html#73 Multiple Virtual Memory
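The throughput argument behind going from 4 to 16 regions can be sketched
with a toy multiprogramming model; the 90% disk-wait fraction below is an
illustrative assumption, not a measured 370/165 figure.

```python
# Toy multiprogramming model: each region independently spends fraction
# w of its time waiting on disk, so the CPU is busy whenever at least
# one region is runnable: utilization ~= 1 - w**n for n regions.
# w = 0.9 is an illustrative assumption, not a measured 370/165 figure.

def cpu_utilization(n_regions, disk_wait_fraction=0.9):
    return 1 - disk_wait_fraction ** n_regions

print(f"4 regions (MVT in 1mbyte):  {cpu_utilization(4):.2f}")   # ~0.34
print(f"16 regions (VS2 in 1mbyte): {cpu_utilization(16):.2f}")  # ~0.81
```

Under these assumptions, quadrupling the multiprogramming level takes
nominal CPU utilization from roughly a third to over 80% -- the shape of
the argument for moving to virtual memory.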

risc has been doing cache miss compensation for decades, out-of-order
execution, branch prediction, speculative execution, hyperthreading ...
can be viewed as hardware analogy to 60s multitasking ... given the
processor something else to do while waiting for cache miss. Decade or
more ago, some of the other non-risc chips started moving to hardware
layer that translated instructions into risc micro-ops for scheduling
and execution ... largely mitigating performance difference between
those CISC architectures and RISC.

IBM documentation claimed that half the per processor improvement from
z10->z196 was the introduction of many of the features that have been
common in risc implementation for decades ... with further refinement in
ec12 and z13.
z10, 64 processors, aggregate 30BIPS or 469MIPS/proc
z196, 80 processors, aggregate 50BIPS or 625MIPS/proc
EC12, 101 processors, aggregate 75BIPS or 743MIPS/proc

however, z13 claims 30% more throughput than EC12 with 40% more
processors ... which would make it 700MIPS/processor

as an aside, 370/195 pipeline was doing out-of-order execution ...
but didn't do branch prediction or speculative execution ... and
conditional branch would drain the pipeline. careful coding could keep
the execution units busy getting 10MIPS ... but normal codes typically
ran around 5MIPS (because of conditional branches). I got sucked into
helping with hyperthreading the 370/195 (which never shipped); it would
simulate two processors with two instruction streams, two sets of
registers, etc ... the assumption being that two instruction streams,
each running at 5MIPS, would keep all the execution units busy at 10MIPS.

In summer 1968, Ed Sussenguth investigated making the ACS-360 into a
multithreaded design by adding a second instruction counter and a second
set of registers to the simulator. Instructions were tagged with an
additional "red/blue" bit to designate the instruction stream and
register set; and, as was expected, the utilization of the functional
units increased since more independent instructions were available.

US Patent 3,771,138, J.O. Celtruda, et al., Apparatus and method for
serializing instructions from two independent instruction streams, filed
August 1971, and issued November 1973. Note: John Earle is one of the
inventors listed on the '138. "Multiple instruction stream
uniprocessor," IBM Technical Disclosure Bulletin, January 1976,
2pp. [for S/370]

... snip ...

Note the next sidebar is ES/9000 ... containing many features from
ACS-360 more than 20yrs later (Amdahl's account is that executives ended
ACS-360 because it would advance the state-of-the-art too fast and they
would lose control of the market).

other trivia ... starting in the middle to late 70s, I started
pontificating that the relative system performance of disks was declining,
and by the early 80s, disk relative system performance had declined by a
factor of 10 times (order of magnitude) over a period of 15 years (disks
had gotten 3-5 times faster, but processors had gotten 50 times
faster). Disk division executives took exception to my statements and
assigned the division performance group to refute what I was
saying. After several weeks they came back and effectively said that I
had understated the problem. Their analysis was then respun into a SHARE
presentation (B874) on optimizing disk configurations for system throughput
... old references
http://www.garlic.com/~lynn/2006f.html#3 and
http://www.garlic.com/~lynn/2006o.html#68

Is there a source for detailed, instruction-level performance info?

mike.a.schwab@GMAIL.COM (Mike Schwab) writes:
If branch predicting is a big hang up, the obvious solution is to
start processing all possible outcomes then keep the one that is
actually taken. I. E. B OUTCOME(R15) where R15 is a return code of
0,4,8,12,16.

Eager execution is a form of speculative execution where both sides of
the conditional branch are executed; however, the results are committed
only if the predicate is true. With unlimited resources, eager execution
(also known as oracle execution) would in theory provide the same
performance as perfect branch prediction. With limited resources eager
execution should be employed carefully since the number of resources
needed grows exponentially with each level of branches executed
eagerly.[7]

JO.Skip.Robinson@ATT.NET (Skip Robinson) writes:
'What a programmer is supposed to do' is avoid stupid code. We were once
tasked with finding the bottleneck in a fairly mundane VSAM application. It
ran horribly, consuming scads of both CPU and wall clock. It didn't take
long using an OTS product to discover that for every single I/O, the cluster
was being opened and closed again even though nothing else happened in the
meantime. Simply changing that logic slashed resource utilization.

In another case, we were on the verge of upgrading a CEC when the
application folks themselves discovered a few grossly inefficient SQL calls.
Fixing those calls dropped overall LPAR utilization dramatically.

What Tom and I are both saying is that focus on instruction timing should be
seen as more of an avocation than a serious professional pursuit. Like
playing with model trains at the expense of improving actual rail systems.
It's interesting, but not much real business depends on the outcome.

as "Performance Predictor". Customer support people could gather system
details and provide activity profile information to the Performance
Predictor and then ask questions about what would happen if
additional hardware was added or the workload changed.

In the wake of the corporate troubles of the early 90s, the company
unloaded a lot of resources (for instance, internal VLSI design tools
were given to an outside company that specialized in selling such tools).

There was somebody who had acquired the rights to a descendant of the
Performance Predictor, had run it through an APL->C
language converter ... and used it for a successful performance
consulting career. Around the turn of the century, I ran into him at
a large datacenter that had 40+ max-configured ibm mainframe
systems. The number of systems was sized to be able to do financial
transaction settlement in the overnight batch window ... all
running a 450k statement COBOL application.

The COBOL application had evolved over a couple decades and was under
the care of a large performance group that was constantly doing
hot-spot sampling and optimizing hotspots. The performance
predictor descendant was being used for identifying more global
issues ... which resulted in about 7% improvement (on an already
enormously optimized application).

I volunteered to do multiple regression analysis on higher level
activity data. This found a certain activity was accounting for 20+
percent of processor use. Looking at it closer, they realized that it was
invoking a certain very expensive process three times ... when it only
needed to be invoked once ... correcting that resulted in a 14%
improvement.

rpinion@NETSCAPE.COM (Richard Pinion) writes:
Don't use zoned decimal for subscripts or counters, rather use indexes
for subscripts and binary for counter type variables. And when using
conditional branching, try to code so as to make the branch the
exception rather than the rule. For large table lookups, use a binary
search as opposed to a sequential search.

in the late 70s we would have friday nights after work ... and discuss a
number of things ... along the lines of what came up in the tandem memos
... aka I was blamed for online computer conferencing on the internal
network (larger than arpanet/internet from just about the beginning
until sometime mid-80s) in the late 70s and early 80s. folklore is that
when the corporate executive committee was told about online computer
conferencing (and the internal network), 5 of 6 wanted to fire me. from
IBMJARGON:

[Tandem Memos] n. Something constructive but hard to control; a fresh of
breath air (sic). "That's another Tandem Memos." A phrase to worry
middle management. It refers to the computer-based conference (widely
distributed in 1981) in which many technical personnel expressed
dissatisfaction with the tools available to them at that time, and also
constructively criticized the way products were [are] developed. The
memos are required reading for anyone with a serious interest in quality
products. If you have not seen the memos, try reading the November 1981
Datamation summary.

... snip ...

one of the issues was that the majority of the people inside the company
didn't actually use computers ... and we thought things would be
improved if the people in the company actually had personal experience
using computers, especially managers and executives. So we eventually
came up with the idea of online telephone books ... of (nearly)
everybody in the corporation ... especially if lookup elapsed time was
less than lookup in the paper telephone book.

avg binary search of 256k entries is 18 probes ... aka 256k is
2**18. Also important, there were nearly 64 entries per physical block
... so binary search to the correct physical block is 12 reads
(i.e. 64 is 2**6, 18-6=12).

However, it is fairly easy to calculate the name letter frequency ... so
instead of doing a binary search, do a radix search (based on letter
frequency) and get within the correct physical block within 1-3
physical reads (instead of 12). We also got fancy, doing first-two-letter
frequency and partially adjusting the 2nd probe based on how accurate the
first probe was. In any case, binary search assumes totally unknown
distribution characteristics.
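The comparison can be sketched with synthetic uniformly distributed keys
standing in for sorted names (an assumption; the real program tabulated
actual letter frequencies). Both routines count physical-block reads:
blind bisection over blocks versus a first probe estimated from the key
distribution, widening outward.

```python
import random

N, PER_BLOCK = 256 * 1024, 64        # entries, entries per physical block
BLOCKS = N // PER_BLOCK              # 4096 blocks -> 12 binary block reads
KEYSPACE = 10 ** 7
random.seed(1)
keys = sorted(random.sample(range(KEYSPACE), N))  # stand-in for sorted names

def block_range(blk):
    """first and last key stored in physical block blk"""
    return keys[blk * PER_BLOCK], keys[(blk + 1) * PER_BLOCK - 1]

def binary_reads(target):
    """physical-block reads for a plain binary search over blocks"""
    lo, hi, reads = 0, BLOCKS - 1, 0
    while lo <= hi:
        mid = (lo + hi) // 2
        reads += 1
        first, last = block_range(mid)
        if target < first:
            hi = mid - 1
        elif target > last:
            lo = mid + 1
        else:
            return reads
    return reads

def radix_reads(target):
    """estimate the block from the (here: uniform) key distribution,
    then widen outward -- analogous to probing by letter frequency"""
    est = min(target * BLOCKS // KEYSPACE, BLOCKS - 1)
    reads = 0
    for delta in range(BLOCKS):
        for blk in sorted({max(est - delta, 0), min(est + delta, BLOCKS - 1)}):
            reads += 1
            first, last = block_range(blk)
            if first <= target <= last:
                return reads

targets = random.sample(keys, 200)
avg_b = sum(binary_reads(t) for t in targets) / len(targets)
avg_r = sum(radix_reads(t) for t in targets) / len(targets)
print(f"binary: ~{avg_b:.1f} block reads, distribution probe: ~{avg_r:.1f}")
```

With real name files and first-letter (then first-two-letter) frequency
tables, the post reports getting within the correct block in 1-3 reads
instead of 12.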

So one friday night, we established the criteria: designing, implementing,
testing and deploying the lookup program had to take less than a
person-week of effort ... and less than another person-week to design,
implement, test and deploy the process for collecting, formatting and
distributing the online telephone books.

trivia ... long ago and far away ... a couple of people I had worked with
at Oracle (when I was at IBM and working on cluster scaleup for HA/CMP)
had left and were at a small client/server startup responsible for
something called commerce server. After cluster scaleup was transferred,
announced as IBM supercomputer, and we were told we couldn't work on
anything with more than four processors ... we decide to leave. We are
then brought in as consultants at this small client/server startup
because they want to do payment transactions on the server; the startup
had also invented this technology called SSL they want to use; the
result is now frequently called "electronic commerce".

TCP/IP has a session termination process that includes something called
the FINWAIT list. At the time, session termination was a relatively
infrequent process and common TCP/IP implementations used a sequential
search of the FINWAIT list (assuming that there would be few or no
entries on the list). HTTP (& HTTPS) implementations chose to use TCP
... even though HTTP is effectively a datagram protocol rather than a
session protocol, so every HTTP operation involved session setup and
termination. There was a period in the early/mid 90s, as web use was
scaling up, when webservers saturated, spending 90-95% of CPU time doing
FINWAIT list searches ... before the various implementations were
upgraded to do significantly more efficient management of the FINWAIT
(session termination) process.
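The scaling failure is easy to model; the connection tuples and counts
below are illustrative, not from any actual TCP stack. A sequential scan
of the termination list costs O(n) comparisons per lookup, while keying
the same entries by connection tuple costs one probe.

```python
# Toy model of the FINWAIT-list problem: every terminating connection is
# looked up in the termination list. A sequential scan costs O(n)
# comparisons per lookup; keying the entries by (addr, port) in a dict
# costs one probe. Connection tuples below are illustrative.

class LinearFinwait:
    def __init__(self):
        self.entries, self.comparisons = [], 0
    def add(self, conn):
        self.entries.append(conn)
    def find(self, conn):
        for e in self.entries:       # sequential search, as in early stacks
            self.comparisons += 1
            if e == conn:
                return e
        return None

class HashedFinwait:
    def __init__(self):
        self.entries, self.probes = {}, 0
    def add(self, conn):
        self.entries[conn] = conn
    def find(self, conn):            # single hashed probe
        self.probes += 1
        return self.entries.get(conn)

linear, hashed = LinearFinwait(), HashedFinwait()
conns = [("10.0.0.%d" % (i % 250), 1024 + i) for i in range(5000)]
for c in conns:
    linear.add(c)
    hashed.add(c)
for c in conns:                      # one lookup per terminating connection
    linear.find(c)
    hashed.find(c)
print(linear.comparisons, hashed.probes)  # 12502500 vs 5000
```

With 5,000 pending terminations, the sequential scan does about 12.5
million comparisons where the hashed table does 5,000 probes -- the same
O(n) growth that saturated mid-90s webservers.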

--
virtualization experience starting Jan1968, online at home since Mar1970

Between CISC and RISC

Quadibloc <jsavard@ecn.ab.ca> writes:
Of course, I do recall that the heat wall was why Intel went back to a
Pentium III-esque design in the Core series... and why we don't have
things that go over 4 GHz these days. Except for IBM's 5 GHz mainframe
chip!

IBM can throw lots of cooling & power at it ... but the big cloud
customers are more & more looking at energy per instruction.

The Source, was: 25 Years: How the Web began

danny burstein <dannyb@panix.com> writes:
The Source, at least when I was a customer, used "Prime" somethings.
No idea what..

They offered UPI news access as well as the Online Airline Guide.

I've mentioned before that a couple of people we had worked with at
Oracle on cluster scaleup ... mentioned in this Jan1992 meeting
in Ellison's conference room
http://www.garlic.com/~lynn/95.html#13

later left and showed up at a small client/server startup responsible
for something called commerce server. A few weeks after that meeting,
cluster scaleup is transferred, announced as IBM supercomputer, and we
are told we can't work on anything with more than four processors ...
and we decide to leave. Later we are brought in as consultants
at the small client/server startup because they want to do payment
transactions (on the "commerce server"); the startup had also
invented this technology called "SSL" they want to use; the
result is now frequently called "electronic commerce".

They had moved into facility on middlefield road at the "T" with Ellis
street. We are also brought in to talk about "electronic commerce" with
"connect, inc" which is part way down Ellis st (towards 101). They had
online service that was based on a massive amount of code on top of
oracle ... where they added http & https support in a relatively short
period of time.

There is something in the back of my mind that they used a lot of Prime
computers ... but I can't find a reference at the moment. from long
ago and far away ...
Connect, Inc. is the first company to develop and market flexible and
scalable integrated online server software solutions. With nine years of
online experience, the company is at the forefront of providing high-end
software and services for Fortune-class enterprises wishing to conduct
complex business transactions securely on the Internet or other data
networks. CONNECT is a private company, founded in 1987 and
headquartered in Mountain View, Calif. For more information, telephone
1-800-262-2638 or access the CONNECT World Wide Web site at
http://www.connectinc.com. -0-

... snip ...

--
virtualization experience starting Jan1968, online at home since Mar1970

IBM executives were looking at the 370/165 ... where the typical customer
had 1mbyte ... in part because 165 real memory was very expensive ... and
typical regions were such that they only got four in 1mbyte (after
system real storage requirements).

later, the newer memory for the 370/168 was less expensive ... and four
mbytes started to become much more common ... aka four mbytes on a 370/165
would have meant that the typical MVT customer could have gotten 16
regions ... w/o having to resort to virtual memory ... but the decision
had already been made.

the basic initial transition from os/mvt to os/vs2 svs was MVT laid out
in a single 16mbyte virtual address space ... and a little bit of code to
build the segment/page tables and handle page faults. The biggest code
hit was adding channel program translation to EXCP ... code initially
copied from the CP67 CCWTRANS channel program translation.

the later transition to os/vs2 MVS with multiple virtual address spaces
... had other problems. The os/360 MVT heritage was heavily based on a
pointer-passing API paradigm ... which carried over with the move to
MVS. The first accommodation was to put an 8mbyte image of the MVS
kernel into every application virtual address space ... leaving only
8mbytes (out of 16) for application use. Then, because subsystems were
now in their own (different) virtual address spaces ... a way was needed
for passing parameters & data back and forth between applications and
subsystems using the pointer-passing API. The result was the "common
segment" ... a one mbyte area that also appeared in every virtual
address space ... which could be used to pass arguments/data back&forth
between applications and subsystems (leaving only 7mbytes for
applications). The next issue was that demand for the common segment was
somewhat proportional to the number of concurrent applications and
subsystems ... so the common segment area became the common system area
(CSA) as requirements exceeded 1mbyte. Into the 3033 era, larger
operations were pushing CSA to 4&5 mbytes and threatening to push it to
8mbytes ... leaving no space at all for applications (of course with the
MVS kernel at 8mbytes and CSA at 8mbytes, there wouldn't be any left for
applications ... which drops the demand for CSA to zero).

Part of the solution to the OS/360 MVT pointer-passing API problem was
included in the original XA architecture (later referred to as "811"
... because the documents were dated Nov1978): access registers ... and
the ability to address/access multiple address spaces. To try to
alleviate the CSA explosion in the 3033 time-frame ... a subset of
access registers was retrofitted to the 3033 as dual-address space mode
... but it provided only limited help, since it still required updating
all the subsystems to support dual-address space mode (instead of CSA).

trivia: the person responsible for retrofitting dual-address space mode
to the 3033 ... later leaves IBM for another vendor and eventually shows
up as one of the people behind HP Snake and later Itanium.

--
virtualization experience starting Jan1968, online at home since Mar1970

25 Years: How the Web began

hancock4 writes:
We forget that many on-line systems of the 1960s required terse
command strings with position dependent operands. This was
necessary to hold down connect time, CPU time, and memory in
interpreting user requests. That kind of stuff was way too
terse for occasional lay users.

in the mid-90s, we were asked by the largest airline res system to redo
parts of their system. I went away (with a copy of the full official
airline guide) and came back 2 months later with an implementation of
ROUTES (finding the flts that got from ptA to ptB, about 1/4th of system
utilization).

at the time, the standard reservation terminal interface had changed
little from the 60s: numerous extremely terse character sequences, with
manual effort to tie the sequences together. The new implementation ran
about 100 times faster and could handle all airlines and all flts for
everybody in the world ... which none of the existing implementations
could handle ... with a lot of additional features (handling the whole
world's load would run on ten rs/6000 990s ... a decade later, it would
fit on a cellphone).

Part of the issue was that the design of the existing implementations
hadn't changed since the 60s ... with various kinds of trade-offs made
at that time. Starting from scratch, with 30yrs of advances, a totally
different set of trade-offs could be made. One of the differences was
that existing implementations had trouble where more than two connecting
flts were required. The new implementation could handle any number of
connecting flts necessary to get between any two points on the planet.
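A minimal sketch of that routing property: treat the schedule as a graph
of nonstop segments and breadth-first search it, which finds a
fewest-connection itinerary between any two points regardless of how many
legs it takes. The airports and segments below are hypothetical, not from
the actual system.

```python
from collections import deque

# hypothetical nonstop segments (origin, destination)
segments = [("SFO", "ORD"), ("ORD", "LHR"), ("LHR", "NBO"),
            ("SFO", "NRT"), ("NRT", "SIN"), ("SIN", "NBO"),
            ("ORD", "JFK"), ("JFK", "LHR")]

graph = {}
for a, b in segments:
    graph.setdefault(a, []).append(b)

def route(origin, dest):
    """BFS: fewest-leg itinerary, any number of connections"""
    seen, queue = {origin}, deque([[origin]])
    while queue:
        path = queue.popleft()
        if path[-1] == dest:
            return path
        for nxt in graph.get(path[-1], []):
            if nxt not in seen:
                seen.add(nxt)
                queue.append(path + [nxt])
    return None            # no itinerary exists

print(route("SFO", "NBO"))   # ['SFO', 'ORD', 'LHR', 'NBO']
```

Because BFS explores by number of legs, it never gets stuck at a fixed
connection limit the way the 60s-era designs did.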

Also demo'ed a newer user interface ... that collapsed three separate
arcane interactions into a single transaction that would return
significantly more information ... which could be sorted by various
criteria. It could even display a map of the flt(s), overlaid with
current weather conditions.

25 Years: How the Web began

hancock4 writes:
The long term trend of technology has been to make things simpler
for operators; to improve accuracy and speed.

For instance, the Bell System introduced a new switchboard using
pushbuttons with automated actions, instead of lever keys and manual
action. One objective was to make it easier for occasional operators
to work the switchboard, who wouldn't need as much training in the
nuances that older boards required.

we've had a lot of discussions in the past about operating computers
being analogous to driving cars ... now we are getting driverless cars
... some silicon valley people have been quoted saying it is more like
riding a horse ... i.e. the horse handles a lot of the mechanics.

in the late 90s, we periodically stopped by to talk to the person
responsible for the largest financial transaction system ... which
happened to still run on mainframes. He would claim that its 100%
availability for several years was due to

this corresponds to Jim Gray's study from the early 80s ... showing that
service outages had shifted away from hardware failures (as hardware
became much more reliable). ref
http://www.garlic.com/~lynn/grayft84.pdf

Andrew Swallow <am.swallow@btinternet.com> writes:
To win in an invasion situation you frequently need to fight 3:1
So the Allies would have needed an army of 3 * 400 000 = 1 200 000 men
to displace them.

Those 400 000 German soldiers and their weapons could have made a big
difference around Moscow or in France. Using a few parachute jumps and
radio messages to neutralise them was cheap.

from the recent ongoing "mission command" (auftragstaktik) discussions:
Logistics and industrial capability win in state-on-state wars. On D-Day
(6 Jun) the US alone flew over 3,000 sorties; the Germans could only
manage 150. In the famous debrief of a German anti-tank commander
captured at Normandy, when asked how he came to be captured, his answer
was that he ran out of anti-tank shells before the americans ran out of
tanks.

....

Massive overwhelming resources and willingness to fight a war of
attrition (whether or not it is called a strategy) can offset otherwise
significant strategic and tactical shortcomings. One of the Roosevelt
stories was a reference to Churchill wanting to postpone D-Day another
year or two while Germany & Russia further exhausted each other on the
eastern front (3/4 of German military resources were used against
Russia) ... but Roosevelt didn't believe the American public would stand
for the war continuing into 1947. The current version is how the
military-industrial complex manages American public opinion for a war
that continues forever.

From Guderian's Panzer Leader, loc2902-3:
Hitler then said: 'If I had known that the figures for Russian tank
strength which you gave in your book were in fact the true ones, I
would not--I believe--ever have started this war.'

loc2903-6:
He was referring to my book Achtung! Panzer!, published in 1937, in
which I had estimated Russian tank strength at that time as 10,000;
both the Chief of the Army General Staff, Beck, and the censor had
disagreed with this statement. It had cost me a lot of trouble to get
that figure printed; but I had been able to show that intelligence
reports at the time spoke of 17,000 Russian tanks and that my estimate
was therefore, if anything, a very conservative one.

loc2262-64:
At this time our yearly tank production scarcely amounted to more than
1,000 of all types. In view of our enemies' production figures this
was very small. As far back as 1933 I had visited a single Russian
tank factory which was producing 22 tanks per day of the
Christie-Russki type.

... snip ...

Even John Foster Dulles' significant role in rebuilding the German
economy and its military industry during the 20s & 30s wasn't able to
overcome that disparity; The Brothers: John Foster Dulles, Allen Dulles,
and Their Secret World War, loc865-68:
In mid-1931 a consortium of American banks, eager to safeguard their
investments in Germany, persuaded the German government to accept a
loan of nearly $500 million to prevent default. Foster was their
agent. His ties to the German government tightened after Hitler took
power at the beginning of 1933 and appointed Foster's old friend
Hjalmar Schacht as minister of economics.

loc905-7:
was stunned by his brother's suggestion that Sullivan & Cromwell quit
Germany. Many of his clients with interests there, including not just
banks but corporations like Standard Oil and General Electric, wished
Sullivan & Cromwell to remain active regardless of political
conditions.

loc938-40:
At least one other senior partner at Sullivan & Cromwell, Eustace
Seligman, was equally disturbed. In October 1939, six weeks after the
Nazi invasion of Poland, he took the extraordinary step of sending
Foster a formal memorandum disavowing what his old friend was saying
about Nazism

... snip ...

In 1943, when the US Strategic Bombing program was looking for the
locations and detailed descriptions of German military targets, it got
them from wallstreet (1/3 of US WW2 spending went to strategic bombing).

hancock4 writes:
Another fear of the US and England was that the Soviet
Union would accept a negotiated surrender with Germany,
freeing all German resources to fight on the west. The
invasion had to go on in 1944, ready or not. In hindsight,
it was good that they delayed until then as the Allies
simply weren't sufficiently ready in 1943.

one of the delaying factors for D-Day was that Churchill had diverted
landing boats further east ... which the US viewed as a pure side-show.
They had to wait for additional landing boats to be built for the
Normandy invasion ... and there still weren't the necessary resources
for Marseille. The "ready" references are in part about the US learning
to fight germans in Africa ... but they could have also learned to fight
germans in Europe ... and things would have gone much faster. Part of
Churchill's reasoning was concern for the English position in the middle
east and the flow of resources it extracted from the area (somewhat
obfuscating those issues with whether the US was really ready).

this continues after the war; last week there was a news item that the
CIA had declassified its overthrow of the elected government in Iran
(which was starting to oppose British looting of the country) and its
reinstatement of the Shah (who backed British looting of the country, in
return for backing his staying in power)
http://www.garlic.com/~lynn/2015b.html#68 Why do we have wars?
http://www.garlic.com/~lynn/2015b.html#70 past of nukes, was Future of support for telephone rotary dial ?

... also the US had said that it would invade in '42, then '43, but
delayed until summer '44 ... the Marshall book has Eisenhower wanting to
take Marseille at the same time as the Normandy invasion ("Anvil") ...
but those resources were diverted further east. As a result, Allied
operations almost came to a dead stop until they eventually got around
to taking Marseille (a port large enough to handle the volume of
supplies required by the Allied effort). recent references
http://www.garlic.com/~lynn/2015b.html#79 past of nukes, was Future of support for telephone rotary dial ?
http://www.garlic.com/~lynn/2015b.html#84 past of nukes, was Future of support for telephone rotary dial ?
http://www.garlic.com/~lynn/2015c.html#54 past of nukes, was Future of support for telephone rotary dial ?
http://www.garlic.com/~lynn/2015c.html#62 past of nukes, was Future of support for telephone rotary dial ?

the strategic bombing program (1/3rd of US WW2 resources) was not as
effective as claimed, before, then or since. Also, strategic bombing
command insisted that no long-range fighters were needed ... everything
went into building large strategic bombers. The British comment is that
the US failed to learn anything from the German bombing of england,
where German fighter escort was critical. The US strategic bomber
command learned the hard way that long-range fighters were needed.

One of the other issues was that heavy strategic bombers had
responsibility for the Omaha beach bombing while medium bombers had
responsibility for Utah beach.
"The European Campaign: Its Origins and Conduct" loc2582-85:
The bomber preparation of Omaha Beach was a total failure, and German
defenses on Omaha Beach were intact as American troops came ashore. At
Utah Beach, the bombers were a little more effective because the IXth
Bomber Command was using B-26 medium bombers. Wisely, in preparation for
supporting the invasion, maintenance crews removed Norden bombsights
from the bombers and installed the more effective low-level altitude
sights.54

... snip ...

The previous post said the US mounted 3,000 sorties on D-day versus 150
for the Germans ... but that is just raw numbers ... not taking into
account how many of the US 3,000 sorties were effective.

lots of jokes about the quality of italian troops in ww2 ... but most of
the stories i've seen about battles in italy/sicily involved german
troops in well-entrenched positions.

The Allies would have been better off using the resources to take
Marseille early as the major supply depot ... and then isolating the
entrenched german troops in italy ... rather than frontal assaults ...
more like the island-skipping scenario in the pacific ... ignore
isolated islands where the enemy has no naval or aerial offensive
capability.

Roosevelt/Marshall have a line about giving priority to the European
theater ... that Germany could continue w/o Japan, but Japan couldn't
continue w/o Germany (similarly, italy couldn't continue w/o Germany).

"General of the Army: George C. Marshall, Soldier and Statesman",
pg440/loc9282-85:
But because of the landing craft shortage and the Anzio landings, the
date of ANVIL had slipped steadily. Anzio stalemated and LSTs ticketed
for OVERLORD still resupplying the beachhead, Eisenhower suggested in
late January that ANVIL be canceled.

pg440/loc9285-87:
Marshall was irritated. The Combined Chiefs had agreed just weeks
before that a seven-division assault in Normandy and a two-division
landing in the south of France were possible on May 31 with the
available reserves of landing craft. Yet once more, it seemed, no
agreement was binding.

pg441/loc9310-12:
The Anzio breakout and the fall of Rome on June 4 settled
nothing. While the Americans saw the capture of Rome as the end of the
Italian campaign, Churchill and Brooke instead argued the German
retreat opened unforeseen opportunities for new attacks toward
Florence and the north.

... snip ...

and "The Wars for Asia, 1911-1949", loc4559-63:
When war began, Russian not British or U.S. armies bore the brunt of
Germany's land forces. In the initial German invasion, Russia faced
142 Axis divisions, 20 percent more than the 119 divisions that
overran France.233 During the Normandy landings, when the United
States and Britain finally conducted a cross-channel invasion of the
European continent, they faced only 58 Axis divisions compared to the
228 divisions then deployed against Russia on the Eastern Front.

... snip ...

--
virtualization experience starting Jan1968, online at home since Mar1970

"Osmium" <r124c4u102@comcast.net> writes:
So called "high level negotiations" are scary. Look at photos of
Putin, Netanyahu and Obama. Which one would you rather have doing
your negotiations for you? I don't even know the name of the Chinese
guy but I would guess he could be added to the mix without changing
anything. Being belligerent is not a good way to get elected President
of the USA.

There was a recent news article on the web that Bush canceled plans to
visit Europe because he is threatened with being arrested as a war
criminal over the invasion of iraq.

This was followed by a news article on the web that Obama campaigned on
extracting us from these continuous wars ... and has yet to do it.

The regions were relatively stable prior to 9/11; the interventions of
the last decade have significantly increased anarchy, including
spreading to additional regions. 2010 estimates were that the bill for
the two invasions will hit $5T when long-term veterans benefits are
taken into account (part of the original selling of the invasions was
the claim that they would cost only $50B ... two orders of magnitude
less) ... of course the cost of taking care of veterans is now starting
to impact the flow of funds into the military-industrial complex (part
of the news about charging VA executives with manipulating VA
performance statistics is obfuscation and misdirection from the fact
that congress enormously underfunded the VA, which is at the root of the
service problems) ... and has increased lobbying pressure on congress to
not sacrifice MICC funds for taking care of veterans.

IBM retirement fund

The president of AMEX is in competition to be the next CEO and wins (the
loser leaves, taking his protege, and goes to Baltimore, taking over
what has been described as a loan-sharking business). AMEX is in
competition with KKR for the private-equity take-over of RJR and KKR
wins. KKR then runs into trouble with RJR and hires away the AMEX
president to turn around RJR.
https://en.wikipedia.org/wiki/Barbarians_at_the_Gate:_The_Fall_of_RJR_Nabisco

IBM has gone into the red and is in the process of being broken up
into the 13 "baby blues". The board then hires away the former
president of AMEX to reverse the breakup and resurrect IBM. He uses
some of the same techniques at IBM that had been used at RJR:
http://www.ibmemployee.com/RetirementHeist.shtml

Later, the former president of AMEX leaves IBM and becomes head of
another large private-equity company, which does an LBO (among others)
of the company that will employ Snowden:
http://www.investingdaily.com/17693/spies-like-us/
Private contractors like Booz Allen now reportedly garner 70 percent
of the annual $80 billion intelligence budget and supply more than
half of the available manpower. They're not going away any time soon
unless the CIA and NSA want to start over and with some off-the-shelf
laptops, networked by the Geek Squad from Best Buy. Security
clearances used to be a government function too, but are now a profit
center for various private-equity subsidiaries.

... snip ...

especially when they get paid for doing background checks but just
fill out the paperwork and skip the checks.

Companies in the private-equity mill are under enormous pressure to
generate money every way possible (including the security-clearance
companies just doing the paperwork and skipping actually doing any
checking). There has been a long-standing revolving door between
government and beltway bandits and/or Wall Street ... an example is
the recent CIA director who resigned in disgrace, with a slap on the
wrist for leaking classified documents ... and then joined KKR.

The loser for AMEX CEO (and his protege) make some number of other
acquisitions, eventually acquiring CITIBANK in violation of
Glass-Steagall; Greenspan gives them an exemption while they lobby DC
for the repeal of Glass-Steagall (enabling too-big-to-fail ... which
also results in too-big-to-prosecute and too-big-to-jail ... effective
immunity for a wide range of illegal activities). The protege then
leaves and becomes CEO of one of the other (big four) too-big-to-fail
institutions.

Welch pushes GE Capital into becoming one of the major institutions
responsible for the financial mess ... and for the growth in GE's
bottom line. "Age of Greed: The Triumph of Finance and the Decline of
America, 1970 to the Present", pg200/loc3925-30:
The CNNMoney writers got it slightly wrong. GE was not exactly like
the American economy. It was even more dependent on financial
services. In the early 2000s, GE was again riding a financial wave,
the subprime mortgage lending boom; it had even bought a subprime
mortgage broker. GE borrowed still more against equity to exploit the
remarkable opportunities, its triple-A rating giving it a major
competitive advantage. By 2008, the central weakness of the Welch
business strategy, its dependence on financial overspeculation, became
ominously clear. GE's profits plunged during the credit crisis and its
stock price fell by 60 percent. GE Capital, the main source of its
success for twenty-five years, now reported enormous losses.

pg324/loc6382-85:
General Electric's persistent earnings increases were a leading
example of how earnings were manipulated to produce consistent
gains. Fortune analyzed how Jack Welch used both pension fund reserves
and reserves at GE Capital to supplement quarterly earnings in order
to make them rise consistently. As noted, they rose every quarter for
almost thirteen years. GE stock roughly tripled between 1990 and 1995
and then quintupled between 1995 and early 2000.

... snip ...

Lots of financial engineering ... like what goes on with the current
stock buyback craze.

There is the old line about asking crooks why they rob banks ... "that
is where the money is". Why does Wall Street rob pension funds? ...
"that is where the money is". That is a large part of why they were
paying the rating agencies for "triple-A" on toxic assets ... so they
could sell them to pension funds ... and largely responsible for doing
$27 TRILLION, 2001-2008. Original Bloomberg article:
Evil Wall Street Exports Boomed With 'Fools' Born to Buy Debt
http://www.bloomberg.com/apps/news?pid=newsarchive&refer=home&sid=a0jln3.CSS6c

Disclaimer: Securitized mortgages had been used during the S&L crisis
to obfuscate fraudulent mortgages (the posterchild was office
buildings in the Dallas/Ft. Worth area that turned out to be empty
lots). In the late 90s, I was asked to try and help avoid the coming
economic mess by looking at improving the integrity of supporting
documents as a countermeasure. By then, loan originators were
securitizing loans & mortgages and paying for triple-A ratings (when
both the sellers and the rating agencies knew they weren't worth
triple-A, from Oct2008 congressional testimony). A triple-A rating
trumps supporting documentation, so they could start doing
no-documentation "liar" loans. Being able to pay for triple-A
eliminated any reason for loan originators to care about borrowers'
qualifications or loan quality; they could sell off all loans (as fast
as they could be made) to customers restricted to dealing in "safe"
investments (like large pension funds; the claim is that this accounts
for a 30% loss in funds and a trillions shortfall for pensions).

Russia had 500+ divisions facing 228 Axis divisions. The rest of the
Allies were only facing 58 Axis divisions. US WW2 history is full of
the difficulty the Allies had just facing that reduced number. Now,
what was it that Roosevelt was going to do??

One nuke outside Moscow and the possibility of more?
After the demonstration in Japan, I don't think even one would
be needed.

Various narratives have Hitler believing that if he made a swift
advance and threatened Moscow ... they would sue for peace (he
believed Russia would capitulate, preventing him from turning into
another Napoleon). However, Stalin didn't work that way; Russia took
horrible damage ... and Stalin just continued on. There are stories of
large numbers of new Russian recruits being issued wooden rifle
replicas (that didn't shoot) and sent to the front lines ... on any
attempt at desertion, they would be shot (sometimes in large numbers).

US officials, seeing what Russia suffered at the hands of Germany, had
no doubt about Stalin's resolve. The rest is just somebody's fantasy.

--
virtualization experience starting Jan1968, online at home since Mar1970