from above:
'The bubble in America was caused by some combination of
megalomania, insanity and evil in, I would say, investment banking,
mortgage banking,' Munger, 87, said today at a conference in
Pasadena, California.

... snip ...

There was an "aha" moment during the bubble burst when investors
realized that the rating agencies were selling triple-A ratings (on
toxic CDOs) and wondered whether they could trust any of the
ratings. This resulted in the muni-bond market "freezing".
Buffett/Berkshire then stepped in with muni-bond insurance to unfreeze
the market.

when Buffett made the muni-bond insurance announcement, he said that
it wasn't totally altruistic, that he was planning on making a profit
(off investors afraid that they could no longer trust rating agencies,
having found out that they were selling triple-A ratings on toxic CDOs)

pdp8 to PC- have we lost our way?

Peter Flass <Peter_Flass@Yahoo.com> writes:
This really set VM back on its heels. VM was open-source before the
term was invented. Much more than OS/360, though probably not as much
as JES2, many major enhancements to VM were made by users who were
able to access the source.

HASP and then JES2 started using the CP67 incremental source update
process internally (as did VM370). The problem for JES2 was that there
was an internal MVS process, and some contortion was needed to translate the
(CP67/VM370 based) incremental source update into the internal MVS
process for product ship (which didn't include source maintenance for
product/customers).

Recent post about a JES2 networking issue ... in this case "pricing" as
application software in the wake of the 23Jun69 unbundling
http://www.garlic.com/~lynn/2011h.html#62 Do you remember back to June 23, 1969 when IBM unbundled

JES2 inherited its networking support from HASP networking ... which
used spare entries in the (256-entry) pseudo-device table to define
node-ids. The HASP networking code carried the identifier "TUCC" out in
columns 68-71 (customer location where code originated).
http://www.garlic.com/~lynn/submain.html#hasp

For the internal network, the HASP/JES2 networking support had a whole
set of problems. First, it intermixed networking fields with job-control
fields ... interconnecting HASP/JES2 at different release levels
could result in crashing HASP/JES2 and bringing down the whole operating
system. HASP/JES2 would throw traffic away if it didn't have the
definition for either the destination or the origin node. Since standard
HASP/JES2 could have 60-80 pseudo-device entries (for pseudo unit record
printer/punch/reader devices), that left only 170-190 entries for
defining network nodes. The internal network had fairly quickly exceeded
200 network nodes (the internal network was larger than the
arpanet/internet from just about the beginning until late '85 or
possibly early '86)
http://www.garlic.com/~lynn/subnetwork.html#internalnet
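
The capacity squeeze above is simple arithmetic; here is a minimal
sketch of it (table size and device counts are from the text; the
straight subtraction is a simplifying assumption of mine — the text's
170-190 figure suggests a few entries went to other uses):

```python
# Node-capacity arithmetic from the text: a 256-entry pseudo-device
# table shared between pseudo unit-record devices and network node-ids.
# Straight subtraction is a simplifying assumption.

TABLE_SIZE = 256

def node_capacity(pseudo_devices: int) -> int:
    """Table entries left over for defining network node-ids."""
    return TABLE_SIZE - pseudo_devices

# a typical installation with 60-80 pseudo unit-record devices:
for devices in (60, 80):
    print(devices, "pseudo-devices ->", node_capacity(devices), "node entries")

# even the roomiest case (196 entries) cannot hold the 200+ nodes the
# internal network reached almost immediately
```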

With hundreds of nodes all over the world, it was literally impossible
to guarantee that all nodes operated at the same software release level,
and with more nodes than could be defined by JES2, JES2 couldn't be
trusted not to trash large amounts of traffic (assuming it didn't crash
the system). As a result, internal JES2 network nodes were restricted to
edge-nodes behind some vm370 node.

VM370 networking grew up with clean separation of components ... so it
was fairly easy to do an NJI driver that talked to JES2 (as an
alternative to native vm370 drivers). Internally, a whole library of
(vm370) NJI drivers grew up that would translate NJI header information
into a general form and then format it for the specific JES2 release on the
other end of the link. There was a famous case of a JES2 system in San
Jose at one level crashing MVS/JES2 systems (at a different level) in
Hursley ... because the Hursley VM370 didn't have the correct NJI driver
started to keep MVS/JES2 from crashing (and yes, they blamed the
crashes on vm370).

As mentioned in the unbundling reference, when they went to announce the
JES2 networking product ... they couldn't come up with pricing that
resulted in covering the cost of the product. Now VM370 networking had a
much larger customer install forecast and radically lower
development & maint cost ... which initially resulted in a "low price" of
$30/month. The eventual "solution" was to announce a "combined"
product at $600/month ... effectively the vm370 networking product
underwriting the cost of JES2 networking.

eventually, JES2 expanded the maximum number of supported nodes to 999,
but not until after the internal network had passed 1000 nodes (and
later still increased it to 1999 nodes, after the internal network had
passed 2000 nodes).

Stephen Wolstenholme <steve@tropheus.demon.co.uk> writes:
That clearly demonstrated that Unix was the first vulnerable OS.
Earlier operating systems, mainly for mainframes, were less
vulnerable.

the xmas card was almost exactly a year earlier on bitnet; although it
was more social engineering ... a CMS exec that the user had to execute
(after loading it from the network) ... which did the xmas greeting
display but also (re-)sent itself to all your friends. The exploit is
similar to various ploys getting people to download something and
execute it on their local machine.

I was actually involved in looking at the exposure more than a decade
earlier (mid-70s, sending somebody an exec that they were to execute,
which did things that they weren't aware of).

for the fun of it ... an old post with an example of a typical xmas
exec greeting (w/o any hidden stuff) ... this one from 1981, had been
written in rexx and if (the recipient ran it) on a 3279 would blink
"lights" in color (I've attempted to reproduce it in html; at the
moment, garlic.com is down for another couple hrs doing some maint)
http://www.garlic.com/~lynn/2007v.html#54 An old fashioned Christmas

Stephen Wolstenholme <steve@tropheus.demon.co.uk> writes:
I don't know how secure 370 operating systems were/are. I worked on
VME when it passed all the military security requirements. That was
why it was widely sold to clients who needed high security. See
https://en.wikipedia.org/wiki/ICL_VME
security enhancements for details. SFAIK Fujitsu have maintained that
level of security since
absorbing ICL.

oh, recent reference to PROFS and iran/contra (congressional subpoena
for emails) ... the executive branch wasn't the only gov. operation
using PROFS for email in the early 80s
http://www.garlic.com/~lynn/2011g.html#73 We list every company in the world that has a mainframe computer

VMSG was an internal email client. The PROFS group picked up a very
early 0.xx version of the source and used it for PROFS (wrapped a bunch
of menu stuff around various applications). Later, when the VMSG author
contacted the PROFS group with an offer of a much enhanced, current
version of the source, the PROFS group attempted to get him fired. The whole
thing quieted down after it was shown that every PROFS email carried the
VMSG author's initials in a non-displayed control field. After that, the
VMSG author only shared the source with two other people.

There was a sort of sequence: litigation and the (23jun69) unbundling
announcement, starting to charge for (application) software (but not
kernel software), then the side-track into FS (and killing off various
370 development) which allowed clone processors to gain a market
foothold, then the death of FS and the mad rush to get stuff back into
the 370 hardware and software product pipelines.

The clone processor competition contributed to the decision to start
charging for kernel software, which went thru several stages ... until
all kernel software was being charged for. After the transition to
charging for all kernel software ... there was then the announcement of
the transition to OCO (i.e. object code only) and the OCO-wars. This was
especially traumatic for vm370/cms customers, since there was a history
(dating back to cp67) of providing software maintenance in source code
form (with a process of incremental source updates that had been
developed for cp67).

I had continued doing 360/370 stuff through the FS period (periodically
ridiculing FS activities ... even claiming some of the stuff I had
running was better than their vaporware descriptions; possibly not the
best career-enhancing move). Recent discussion/thread ("IBM Mainframe
1980's on You tube") in the customer ibm-main mailing list
http://www.garlic.com/~lynn/2011h.html#70
and in a.f.c. newsgroup
http://www.garlic.com/~lynn/2011h.html#75

--
virtualization experience starting Jan1968, online at home since Mar1970

from above:
Most call it one of the biggest financial crises in living
memory. Others call it one great big Ponzi scheme. Whatever you want
to call it, a bunch of people lost a bunch of money and the world of
high finance may never be the same. But don't worry --
that doesn't mean that we've fixed all these problems
or punished the people responsible. It just means that next time you
can't get a loan or a higher credit limit, the banks will have
an excuse.

At least two decades back, some gurus predicted that mainframes would disappear in future and it still has not happened

From: lynn@garlic.com (Lynn Wheeler)
Date: 03 Jul, 2011
Subject: At least two decades back, some gurus predicted that mainframes would disappear in future and it still has not happened
Blog: Mainframe Experts

various of the operations that operate mega-datacenters with hundreds
of thousands of blades have published articles about their selections
... detailed studies of commodity priced parts for MTBF,
life-expectancy, price/performance, energy consumption, etc. There are
claims that they managed to build much better infrastructure with
carefully selected commodity priced components at 1/3rd the cost of
off-the-shelf "branded" blades. Availability is then built into the
infrastructure fabric (as opposed to any specific blade; aka build the
best MTBF for individual blades out of commodity parts and then gain
additional availability by having enormous numbers of replicated blades
... followed by replicated mega-datacenters). Virtualization is also
playing a larger & larger role in availability technologies for these
operations.

you don't see mainframes there. If anything, it seems like IBM is
trying to revitalize the mainframe by infusing it with technologies
from other environments. Gov. labs: I remember them in the 90s moving
off mainframes they were perfectly happy with ... because their
existing support staff was retiring and they couldn't find replacements
(visiting labs when announcements were made and listening about having
open positions for over a year that they were unable to backfill). Part
of the issue was Y2K happening at the same time as the internet bubble
... internet startups were offering enormous compensation and the
financial industry was outbidding nearly everybody else for dwindling
mainframe skills.

there tends to be some overlap between high-density blade servers,
datacenter "containers", power and cooling efficiencies, etc (with
advances in particular areas helping drive advances in other areas).
The evolution of the mega-datacenters has helped drive all these areas
(hundreds of thousands of processors with millions of cores).
Potentially some of these individual mega-datacenters may have more
computing capacity than the aggregate of all currently installed
mainframes. It then would be natural for mainframes to start to look
more & more like such operations (trivial example: archaic, obsolete
CKD DASD is now all simulated on real FBA disks).

Vector processors on the 3090

rkuebbin@TSYS.COM (Kirk Talman) writes:
If this is the beast I think it is, it attached only to 360s as a channel
that had outboard channels. Memory (no bit correction) says that was 44,
65, 75, 91, and 165/8 on 370. May be more. The "programs" were channel
programs. I was told that this was the reason the 44 was created. And
that it was 65 + lobotomy.

from above:
Although the Model 44 processing unit is about the same in physical size
(Figure 1) as that of its nearest neighbor, the Model 50, its
performance on problems for which it is optimized is 30 to 60 per cent
faster than that of Model 50.

... snip ...

and:
Processor storage speed for the Model 44 is 1 microsecond. Four bytes
(one word or two halfwords) are stored or fetched in each
access. Processor storage, always housed within the CPU, is available in
the four capacities shown at the top of Figure 3.

Data paths throughout the CPU are one word wide.

... snip ...

the functional characteristics manual for the 360/40 (also on
bitsavers) shows two-byte datapaths
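
A quick sanity check on the quoted figures: 4 bytes per 1-microsecond
access works out to a peak storage bandwidth of 4 MB/s, assuming fully
back-to-back accesses (an idealization of mine; real instruction mixes
would see less):

```python
# Peak storage bandwidth implied by the quoted Model 44 figures,
# assuming fully back-to-back accesses (an idealization).
cycle_time_us = 1.0    # 1 microsecond per storage access
bytes_per_access = 4   # one word (two halfwords) stored or fetched

bandwidth = bytes_per_access / (cycle_time_us * 1e-6)   # bytes/second
print(f"peak storage bandwidth: {bandwidth / 1e6:.0f} MB/s")

# a two-byte-wide data path (as noted for the 360/40) would halve this
# figure at the same storage cycle time
```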

CBS had a segment circa 2002 or 2003 reporting that one of the GSEs had
more lobbyists than employees ... everybody that had ever been a
congressman, congressional staffer and/or in any way related to
congress was offered a lobbying retainer. That was somewhat the
motivation for proposed legislation two yrs ago to make it illegal for
GSEs to lobby. However, the toxic CDOs held by just the four largest
too-big-to-fail institutions totally dwarfed the GSEs (& TARP funds).
It was so large that it could only be handled behind the scenes by the
Federal Reserve.

Peter Flass <Peter_Flass@Yahoo.com> writes:
This is caused in large part by the huge malpractice settlements
juries have awarded. Republicans have been trying to reform this for
years, but the doctors' lobby is a major contributor to the Democrats,
so they haven't been able to make much headway.

medicaid is done by states but 50% funded by the federal gov. Common
estimates are that medicaid bills are inflated an avg. of 20-30% by
providers (separate from fraud issues). In the past couple years, the
federal gov. has offered to increase funding to 60% (i.e. 20% of the
states' contribution) if the states aligned their billing and fraud
reviews to federal standards. In some number of states, lobbying by
providers (services, appliances &/or equipment) has resulted in
defeating legislation to conform with federal guidelines (not in
anybody's interest except those providers).

it seems that part of this was that, in the past couple years, many
states used the federal stimulus funds to cover their part of medicaid
funding.
The overbilling and outright fraud is significantly more prevalent in
some states.

--
virtualization experience starting Jan1968, online at home since Mar1970

a big reason for triple-A rated toxic CDOs was to open up mortgages to
a much larger market (than GSEs and w/o the qualifications &
restrictions required by GSEs) ... and the estimated $27T in triple-A
toxic CDO transactions (done during the bubble) had enormous fees &
commissions for wall street (that didn't exist with straight GSE
mortgage purchases). Estimates are that wallstreet tripled in size (as
a percent of GDP) during the bubble and wallstreet bonuses spiked over
400% during the bubble.
Evil Wall Street Exports Boomed With 'Fools' Born to Buy Debt
http://www.bloomberg.com/apps/news?pid=newsarchive&refer=home&sid=a0jln3.CSS6c

Bank of America Fights to Deflect Fraud Investigations
http://www.thewire.com/business/2011/06/bank-america-fraud-investigation-hindered/38793/
The battle to hold Bank of America accountable for faulty foreclosures
that may have scammed taxpayers out of billions is intensifying at the
state level. In a lawsuit filed by the state of
Arizona against the nation's largest lender, a federal auditor says
that Bank of America "significantly hindered" a review of its
foreclosure practices on loans insured by the Federal Housing
Administration.

60mins had a segment on the medicare part-d legislation ... focusing on
18 congressmen & staffers that were instrumental in shepherding the
bill through. Right at the end, a one-liner was inserted that
eliminated competitive bidding, and distribution of the CBO report
taking that change into account was blocked. 60mins then showed
side-by-side identical drugs from the veterans administration and
medicare part-d ... the VA (allowing competitive bidding) drugs were
1/3rd the price of the same drugs under medicare part-d. They also
found that all 18 (that shepherded the bill thru) had, within 6-12
months, resigned and were on drug company payrolls.

the comptroller general was on a program saying that medicare part-d
comes to represent a $40T unfunded mandate ... totally swamping all
other budget items ... and the bill was passed shortly after congress
let the fiscal responsibility act expire.

in addition, the comptroller general also pointed out that the massive
tax cuts also occurred shortly after the fiscal responsibility act
expired (big decreases in revenue coupled with big increases in
liabilities & spending)

... the $27T in triple-A rated toxic CDO transactions was to open the
mortgage market to other customers. Buying a triple-A rating also
enabled no-documentation mortgages (eliminated the need for most
supporting documentation). It also created enormous (new) fees and
commissions in many quarters.

--
virtualization experience starting Jan1968, online at home since Mar1970

Got to remembering... the really old geeks (like me) cut their teeth on Unit Record

box/tray of cards &/or coffee cups on top of the 1403N1 printer ... the
cover would automatically lift when it ran out of paper (see above post
on porting 1401 MPIO to 360/30 in assembler ... eventually a box of
cards). A carton of cards would have been five boxes.

a magic marker diagonal line across the top of a deck of cards, from
one corner of the top to the opposite corner. New cards added &/or
replaced would be unmarked ... so periodically remark the deck.

originally, when CP67 was installed at the univ (jan68), the source was
on os/360 and assembled there. Assembly of each source module would
produce a "TXT" (binary/executable) deck. These would be arranged in a
card tray. Rebuilding the kernel required ipl/booting the card deck
(loaded into memory, it would write the memory image to disk for IPL).

Each "TXT" deck would have a diagonal line plus the name of the module
(in magic marker). Update the source image on os/360, re-assemble, take
the new TXT deck, mark it across the top, replace the corresponding
deck in the card tray, re-IPL the whole card tray, and then reboot the
system from disk.

As CMS became more reliable, the source was moved from OS/360 to CMS
(running in virtual machine). Each individual source image could be
assembled on CMS, producing a TXT file. A CMS exec would "punch" the
"TXT" files to the virtual punch, which was set up to "transfer" to the
virtual reader (rather than going to the "real" punch). The virtual
reader could be IPL'ed and the virtual memory image written to the real
disk (which could then be "real" IPLed). The process continued with vm370
for another couple decades.

The source routines were sequenced by 100 and the CMS editor could
resequence each file after editing (new, changed, deleted card
images). Each source file had an assembler "ISEQ 73,80" statement at
the start of the file (instructing the assembler to check the sequence
numbers in cols. 73-80 ... left over from real card days).
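
The ISEQ check described above is easy to model: verify that the
eight-character field in columns 73-80 of each card image is numeric and
strictly ascending. A minimal sketch (card layout per the text; the
helper name is mine):

```python
def check_iseq(cards, start=72, end=80):
    """Return True if the sequence field in cols 73-80 (0-indexed 72:80)
    of every card image is numeric and strictly ascending."""
    prev = None
    for card in cards:
        field = card[start:end].strip()
        if not field.isdigit():
            return False
        if prev is not None and int(field) <= prev:
            return False
        prev = int(field)
    return True

# five 80-column card images, sequenced by 100 as described in the text
deck = [f"{'* some source line':<72}{n:08d}" for n in range(100, 600, 100)]
print(check_iseq(deck))   # True
```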

An incremental source change process was then started. Rather than edit
the source file and resequence the whole file, a "update" file was
edited ... which specified changes to the original source based on
sequence numbers (insert, replace, delete). The CMS UPDATE program
would apply an update file to the original source, producing a
temporary source file that was assembled ... producing a TXT deck. If
you were inserting more cards than the sequence gap could provide for,
you replaced adjacent cards, renumbering them to gain the additional
sequence numbers. Originally, the edit/update process required a person
to manually type the sequence numbers on the new/replaced cards. Early
on, I wrote a preprocessor to the update program that would
automatically put in appropriate sequence numbers before invoking the
UPDATE program.
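
The mechanics can be sketched roughly as follows. This is a toy model of
the general idea only (inserts/replaces/deletes keyed off the base
file's sequence numbers, producing a new temporary file), not the actual
CMS UPDATE control-card format; all names are illustrative:

```python
def apply_update(base, updates):
    """Apply incremental updates to a sequenced source file.

    base:    list of (seqno, card_text) tuples, ascending by seqno.
    updates: list of ops -- ('I', after_seq, cards)  insert after a card,
                            ('R', lo, hi, cards)     replace a range,
                            ('D', lo, hi)            delete a range.
    Returns the updated card images; the base file is left untouched.
    """
    inserts = {}   # seqno -> cards to insert after that card
    ranges = []    # (lo, hi, replacement_cards_or_None)
    for op in updates:
        if op[0] == 'I':
            inserts.setdefault(op[1], []).extend(op[2])
        elif op[0] == 'R':
            ranges.append((op[1], op[2], op[3]))
        else:  # 'D'
            ranges.append((op[1], op[2], None))

    out, i = [], 0
    while i < len(base):
        seq, text = base[i]
        hit = next((r for r in ranges if r[0] <= seq <= r[1]), None)
        if hit:
            lo, hi, repl = hit
            if repl is not None:
                out.extend(repl)     # replacement cards for the range
            while i < len(base) and base[i][0] <= hi:
                i += 1               # skip all cards in the range
            continue
        out.append(text)
        out.extend(inserts.get(seq, []))
        i += 1
    return out

base = [(100, "A"), (200, "B"), (300, "C")]
ups  = [('R', 200, 200, ["B'"]), ('I', 300, ["D"])]
print(apply_update(base, ups))   # ['A', "B'", 'C', 'D']
```

Applying several update files in sequence (as the later multi-update
process did) is then just repeated application, resequencing the
intermediate result each time.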

Then a process was created that could apply multiple different
incremental updates sequentially ... producing the temporary source
file. CP67 and then VM370 customers were used to maintenance being
shipped in incremental source-update form ... so when OCO
(object-code-only) was eventually announced in the early 80s, it was
especially traumatic. Recent post mentioning the OCO-wars from a
(linkedin) IBMers thread: "Do you remember back to June 23, 1969 when
IBM unbundled"
http://www.garlic.com/~lynn/2011i.html#7

Morten Reistad <first@last.name> writes:
Then the US, under Clinton, just removed most of this legislation,
and went back to how this was pre-1936. This happened in 1997, with
most legislation becoming effective for the banks in 1999.

there was some interesting politics between the legislative branch and
the executive branch at the time, with different parties. The draft
legislation initially passed with a majority (the opposing party
holding the majority in the legislative branch) but not enough to
overturn a veto (the executive branch was prepared to veto GLBA). GLBA
then underwent some amount of fine-tuning
https://en.wikipedia.org/wiki/Gramm-Leach-Bliley_Act

On the floor of congress, the rhetoric about the primary purpose of
GLBA was that if you were already a (regulated) bank, you got to remain
a bank, but if you weren't already a bank, you didn't get to become one
(specifically calling out that walmart & m'soft wouldn't get to become
banks). Later, TARP was passed to bail out the financial crash ... but
the amount appropriated would hardly make a dent in the problem. So the
Federal Reserve aggressively stepped in behind the scenes to try and
make the too-big-to-fail institutions whole again. The problem was that
the Federal Reserve could only provide some amount of help for
regulated chartered banks ... and some of the too-big-to-fail
institutions weren't regulated chartered banks ... so the Federal
Reserve just gave them bank charters (which would theoretically have
been precluded by GLBA).

Morten Reistad <first@last.name> writes:
The isolation and insurance clauses made for some cumbersome banking
transactions; and there was consensus that the whole system needed an
overhaul to accommodate more modern financial system organisation,
like CDOs, options strategies etc. Denmark always had a CDO like
structure, and NL adopted one in the 1980s; fully within the
compartmentalisation system as separate loan entities with full
bookkeeping.

CDOs had been used during the S&L crisis ... with fraudulent supporting
documents ... the scams didn't find a large market ... in part because
they lacked a credible rating. Around 2000, we were asked to look at
methods for improving the integrity and trust in CDO supporting
documents.

There were congressional hearings in the fall of 2008 into the role
that the rating agencies played in the financial bubble & collapse,
with testimony that rating agencies were selling triple-A ratings for
toxic CDOs. One of the things that the triple-A ratings enabled was
that loan originators could do no-documentation loans (since investors
would look at the toxic CDO's triple-A rating and not look any
further). Eliminating the need for supporting documents then also
eliminated any issue about improving their trust and integrity.

Now, the Commodity Futures Modernization Act also enabled ENRON (as per
recent references to both Mr. & Mrs.). In the wake of ENRON, congress
passed Sarbanes-Oxley ... which is mostly noted for an enormous
increase (big costs) in audit requirements ... but still required SEC
to be doing something. However, per the testimony in the congressional
Madoff hearings by the person that had tried unsuccessfully for a
decade to get SEC to do something about Madoff, SEC didn't appear
to be doing much. Also, possibly because GAO didn't think that SEC was
doing anything, GAO was doing reports of public company financial
filings that showed an uptick in fraudulent filings ... despite all the
additional Sarbanes-Oxley audits (recent item from the web: Enron was a
dry run and it worked so well it has become institutionalized). A
"snide" multiple-choice: 1) SOX had no effect on fraudulent filings,
2) SOX motivated the increase in fraudulent filings, 3) if it wasn't
for SOX, all public company financial filings would be fraudulent.

It turns out that another provision in Sarbanes-Oxley was SEC to do
something about rating agencies.

in 2008 there was a TV news segment with a roundtable of economists at
some conference ... they made the case for a flat tax:

1) exemptions are a major source of lobbying as well as fraud and
corruption in our legislative process (i.e. a high rate with exemptions
producing a very low effective rate creates an environment that
encourages corporations to spend large amounts on congress ...
significantly contributing to the observation that congress is the most
corrupt institution on earth)

2) exemptions result in a 65,000+ page tax code ... a flat tax would
change that to a 400 or so page tax code ... the change could improve
GDP productivity by possibly six percent (lost resources dealing with
the complexity of the current code, non-optimal business decisions
influenced by tax code provisions). The improved productivity more than
offsets any lost benefits from the elimination of specific exemptions
(the costs of dealing with the number of exemptions more than offset
the benefit of exemptions ... other than the financial benefit to
members of congress).

Anne & Lynn Wheeler <lynn@garlic.com> writes:
CDOs had been used during the S&L crisis ... with fraudulent
supporting documents ... the scams didn't find a large market ... in
part because they lacked a credible rating. Around 2000, we were asked
to look at methods for improving the integrity and trust in CDO
supporting documents.

note that the S&L crisis followed changes in S&L regulation ... one was
cutting the reserves in half (presidentially directed "economic
stimulus") and another was drastically reducing S&L regulation and
oversight.

The reduction in S&L regulation and oversight meant nearly anybody
could buy an S&L and then make all sorts of personal loans ...
drastically opening the door to conflicts of interest (lots of loans to
bank owners and officials at little or no interest ... which might
default with no adverse consequences). From "The Two Trillion Dollar
Meltdown":
... it was possible for a single individual to take control of an S&L,
then organize and lend to multiple subsidiaries -- for land acquisition,
construction, building management, and the like -- and create his own
small real estate empire entirely with depositors' money.

There were lots of snide remarks in the wake of the S&L crisis about
the qualifications for being an S&L official in a heavily regulated and
stable industry. With the S&L de-regulation and cutting of reserves in
half, all these officials found themselves in very unfamiliar waters.
One of the issues was deciding how to invest all the recently released
reserves ... among other things, it turns out that they were sitting
ducks for the investment bankers that swooped in with junk bonds.

There is industry folklore that an S&L regulator refused to do what the
president asked. That S&L regulator was asked to resign and was
replaced by an appointee that did do as directed ... he is frequently
named as a major factor in the S&L crisis (doing what the president
asked). He is cited in "The Two Trillion Dollar Meltdown".

in the 90s, a lot of it was a congress (dominated by the other party)
that drastically reduced spending and somewhat conformed to the fiscal
responsibility act ... i.e. tax collections and spending/obligations had
to be balanced

then in this century ... with the same party in control of congress,
things seemed to go completely wild after the fiscal responsibility act
expired ... reducing tax collections w/o corresponding reduction in
spending along with increases in spending w/o corresponding increase in
tax collection.

the president and the party in control of congress both significantly
contributed to the balanced budget & surplus at the end of the last
century.

but it is almost like Dr. Jekyll and Mr. Hyde ... a couple years later,
nearly that same congress manages to destroy the budget (and nearly
destroy the economy).

it seems like (possibly a subset of) congress managed to push thru the
fiscal responsibility act and got congress to mostly toe the line ...
then when the act expired ... they all just went crazy (somebody on tv
business news just now used a phrase like "looney toons" ...
specifically with respect to the economic bubble and our leveraged
mortgage market relative to the rest of the world).

--
virtualization experience starting Jan1968, online at home since Mar1970

Happy 100th Birthday, IBM!

Anne & Lynn Wheeler <lynn@garlic.com> writes:
... it was possible for a single individual to take control of an S&L,
then organize and lend to multiple subsidiaries -- for land
acquisition, construction, building management, and the like -- and
create his own small real estate empire entirely with depositors'
money.

a little more S&L crisis from "The Two Trillion Dollar Meltdown"
(people buying S&Ls for their own personal piggy bank):
Another owner with a $1.8 billion loan book had bought six Learjets
before the Feds noticed that 96 percent of his loans were delinquent.

... snip ...

--
virtualization experience starting Jan1968, online at home since Mar1970

sidd <sidd@situ.com> writes:
Anyone go to jail ? I seem to recall the Finnish banker did get a six
month suspended sentence, and a fine, most of which he squirmed out of
paying.

I doubt Mozillo or Cassano, or the ratings mafia or any of the rest of
the usual suspects in the USA will ever see a minute of jail time.

Regulatory capture is a wonderful thing if you are the captor. For the
rest of us, not so much.

"The fruitless years behind us, the hopeless years before us," as
Kipling said once.

there was an article in the last decade or two observing that jail time
tends to be less & less likely as the severity of the crime increases
... until it reaches a level of national importance and the responsible
parties can actually be rewarded.

the possible exception recently is Madoff ... although he turned
himself in ... he wasn't caught. During the congressional Madoff
hearings, the person who testified about trying unsuccessfully for a
decade to get SEC to do something about Madoff was extremely publicity
shy. He eventually had a spokesman for some TV interviews ... who made
some veiled references to concerns that powerful/violent/criminal
organizations might be behind Madoff's apparent immunity (and wouldn't
take too kindly to the collapse of the ponzi scheme).

A year after, the person was on book tour and asked about his earlier
reluctance to appear in public; response was that he had changed his
mind ... and the possible reason for Madoff turning himself in was
that Madoff may have con'ed some powerful/violent/criminal
organizations and needed protection (as opposed to having been backed
by such operations).

That still leaves Madoff's apparent immunity from any SEC action
unexplained (i.e. the explanation that SEC was in the pocket of
operations that were also backing Madoff no longer holds).

Web version of mainframes

jagadishanp@GMAIL.COM (jagadishan perumal) writes:
Just wanted to know whether a web version of mainframe can be
implemented. One of our users is trying to access it from a remote
location using a wireless internet connection in which the IP changes
every time.

I take it somewhat as threat analysis ... what are all the things that
can happen and the corresponding countermeasures, vis-a-vis
vulnerability analysis ... which can be: what are your dependencies.

I've periodically done business critical dataprocessing and have
frequently observed that taking a well written application and turning
it into a service can take 4-10 times the original effort.

we had been brought in to consult by a small client/server company
that wanted to do payment transactions on their server; the startup
had also invented this technology they called "SSL" that they wanted
to use (the result is now frequently called "electronic
commerce"). Part of the implementation was something called the
"payment gateway" (interface between servers on the internet and
payment networks). They were doing a well written & tested server
application to interface to the gateway. Rather than look at all
possible ways it might be attacked (and necessary countermeasures)
... I drew up a list of approx. 40 critical components that could
either fail &/or be attacked. In a matrix of the critical components
and half-dozen states ... I required a demonstration that, in each
possible case, it was possible to perform a recovery action
(regardless of problem cause) and/or at least demonstrate 1st level
problem determination within five minutes elapsed time. To go from
their "application" implementation to "server" implementation resulted
in possibly only doubling the amount of code, but closer to five times
the original effort. "Recovery" was all the way through OODA-loop to
act. "First level problem determination" (within five minutes) was at
least sufficient instrumentation & analysis to get at least through
"OO" part (even if it couldn't get all the way through the
loop). Instrumentation can frequently be sufficient to make the
difference between "chaotic" and "complicated".

I didn't care whether failure/fault was purposeful attack,
hardware/software glitch or other event.
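The component/state matrix above can be sketched in a few lines; this is purely illustrative (the component names, state names, and five-minute threshold are the only things taken from the text; everything else is made up for the sketch):

```python
# Hypothetical sketch of the assessment matrix: every (component, state)
# pair must demonstrate either a recovery action or 1st-level problem
# determination within five minutes, regardless of the cause.
# All names here are illustrative, not from the original assessment.

COMPONENTS = [f"component-{i}" for i in range(40)]   # ~40 critical components
STATES = ["up", "degraded", "failed", "attacked", "recovering", "unknown"]

def assess(demonstrations):
    """demonstrations maps (component, state) -> elapsed minutes to
    recover or to complete 1st-level problem determination; return
    every cell that is missing or over the five-minute requirement."""
    gaps = []
    for comp in COMPONENTS:
        for state in STATES:
            minutes = demonstrations.get((comp, state))
            if minutes is None or minutes > 5:
                gaps.append((comp, state))
    return gaps

# if every cell has been demonstrated in 3 minutes, there are no gaps
full = {(c, s): 3 for c in COMPONENTS for s in STATES}
assert assess(full) == []
```

The point of the matrix form is that coverage is exhaustive by construction: any cell without a demonstrated recovery or determination path shows up as a gap, whether the cause is an attack, a glitch, or anything else.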

It turns out that I didn't have the same level of authority over the
client-side implementation; I could give presentations and
recommendations. In the client-side implementation, instead of having
some stuff out-of-the-box ... it took a year elapsed time to get a few
things added/changed. I claimed it was a bunch of young kids, fresh out
of school ... and if they hadn't seen it in some class or text-book
... they would claim "too complicated" (even when I demonstrated
example implementations in production use for nearly a decade).

It is somewhat different from studying "how things fail" ... but more
like, "if something fails, what are the consequences" (and enumerating
them).

jmfbahciv <See.above@aol.com> writes:
Part of the "savings" was the decision to use the National Guard
for major military actions. Clinton administration restructured
the military. One talk I saw on CSPAN mentioned that the
assumption there would be one, and only one, hotspot was the basis
of the restructuring.

another part of "balancing" (for some value of *balance* at some point
in the future) was mandating conversion to digital TV ... this was the
height of the internet bubble and of auctioning off (wireless)
spectrum. digital tv used much less bandwidth than analog tv; cooking
the books assumed that the freed-up bandwidth would then be auctioned
off at extremely high valuation ... which plugged the remaining gap for
achieving *balance* (this was another congressional special).

but as previously mentioned, that congress seemed to be very much
Dr. Jekyll and Mr. Hyde ... mostly the same congress a couple years
later (early part of this century) went wild after the fiscal
responsibility act expired in 2002.

jmfbahciv <See.above@aol.com> writes:
And you might as well double or triple that when Obamacare kicks
in. My mother is getting a bill for $190 for the mandated
annual checkup (which she didn't have) but Medicare isn't covering it.

I haven't seen any reference that the latest round will drive up health
care costs anywhere near the one-liner in medicare part-D.

18 members/staffers in congress from the party in power (shortly after
the fiscal responsibility act expires in 2002) at the last minute add a
one-line provision that precludes competitive bidding ... and prevent
distribution of the CBO report (on the effect of the change) before the
vote. A CBS 60min segment then finds that shortly after passage, all 18
have resigned and are on drug company payroll. The news segment also
compares prices of identical drugs under medicare part-d (w/o
competitive bidding) and veterans affairs (w/competitive bidding) and
finds the VA gets identical drugs at 1/3rd the price.

from above:
According to the analysis of the Project on Defense Alternatives,
between 1998 and 2010 Congress appropriated to the Pentagon $2.144
Trillion (with a "T") more than was anticipated by the 1999 "baseline."
Of that amount, $1.113 Trillion was spent on the wars in Iraq and
Afghanistan, and $1.031 Trillion was added to "base" (non-war) Pentagon
spending. (See p. 3 of PDA's study, "An Undisciplined Defense:
Understanding the $2 Trillion Surge in US Defense Spending" at

I basically concur with PDA's numbers, which are from DOD and OMB budget
data as described on p. 61.)

What did you get for that extra $1 Trillion? Basically, you got a
smaller Navy and Air Force and a tiny increase in the size of the Army.
As an extra bonus, the hardware those forces use are now older than they
were in the Clinton administration in 1998.

... snip ...

--
virtualization experience starting Jan1968, online at home since Mar1970

Dave Garland <dave.garland@wizinfo.com> writes:
That ain't Obamacare, billing for services not performed is fraud.
And if she didn't have a checkup, it's quite right for Medicare to
refuse to pay for it.

at least in medicaid ... the federal government has had a big push to
correct fraud and inflated billing in state accounting practices
(estimated at as much as 30%) ... including both stick & carrot (the
carrot being an increase in the federal share of medicaid from 50% to
60% ... if a state would align its practices with federal guidelines).
I've looked at state legislatures defeating bills to align state
practices with federal guidelines ... apparently because of heavy
lobbying by various providers.

there were news items of attorneys in state medicaid fraud offices
resigning in protest over the issue.

--
virtualization experience starting Jan1968, online at home since Mar1970

it possibly happens in every piece of legislation ... one of the reasons
that there have been long-time references to congress as the most
corrupt institution on earth.

however, at some point, something in the abstract becomes more
serious. The GAO comptroller general claimed that medicare part-D
becomes a $40T unfunded mandate, totally swamping all other items ...
and that one line represents 2/3rds of that $40T ($27T).

the other part from the GAO comptroller general was that such budget
activities became especially egregious in the period after the fiscal
responsibility act expired in 2002. This became the turning/inflection
point from balanced budget & surpluses in the late 90s to the enormous
deficits that continue up to current period.

this pretty much shows that the (health care) problem becomes dire (with
few, if any, actually denying there is a problem)

while the period after the fiscal responsibility act expiring in 2002
made no effort to demonstrate deficit neutrality (congress having gone
from Dr. Jekyll "budget surplus" to Mr. Hyde "extreme deficit" in a few
short years).

the quantity/quality problem also has an analogy in the bubble. There
have always been hot beds of fraud and corruption in various parts of
the financial industry ... tending to millions or a few billions.
However, the period of significant deregulation (& lack of enforcement
of remaining regulation) allowed the hot beds of fraud and corruption
to combine into a financial firestorm that had the potential of taking
down the economy and country ... with fraud and corruption reaching
trillions.

Note that this was starting to become pervasive at the same time Boyd
would mention (in briefings) that a big problem in American companies
was the young WW2 officers coming into their own as corporate
executives and wanting to emulate their early WW2 training with large
infrastructure and rigid, top-down command and control structures.

remember in the mid-80s, executives were pitching that the company
would double (from $60B to $120B) mostly on mainframe sales. As part
of that there was a huge manufacturing building program (to double
mainframe manufacturing). Possibly as part of the doubling prediction,
there also appeared to be a huge uptick in "fast-track" ... turning out
large numbers of 90-day wonder executives. This is when the mainframe
was on a downtick and the company was heading into the red a few yrs
later (it wasn't necessarily career enhancing to point out that the
company wouldn't be doubling).

The Ferguson and Morris book on IBM references that IBM lived under the
dark shadow of the (FS) failure for decades; also that the old culture
under the Watsons was replaced with sycophancy and make-no-waves under
Opel and Akers.

and this reference from Ferguson & Morris:
Most corrosive of all, the old IBM candor died with F/S. Top
management, particularly Opel, reacted defensively as F/S headed
toward a debacle. The IBM culture that Watson had built was a harsh
one, but it encouraged dissent and open controversy. But because of
the heavy investment of face by the top management, F/S took years to
kill, although its wrongheadedness was obvious from the very
outset. "For the first time, during F/S, outspoken criticism became
politically dangerous," recalls a former top executive.

Also in the late 80s, a senior disk engineer got a talk scheduled at
the annual, worldwide internal communication group conference. He
opened the talk with the statement that the communication group would
be responsible for the demise of the disk division. The issue was the
stranglehold that the communication group (products) had on
datacenters. The disk division was seeing the leading wave of data
fleeing the datacenters (& mainframes, to more distributed-computing
friendly platforms) and came up with a number of products to address
the situation. However, the communication group was protecting their
turf and owned strategic responsibility for everything that crossed
datacenter walls.

there is folklore about how 1993 accounting sleight-of-hand resulted in
the largest executive bonuses paid in the history of the corporation
(just before the change of guard).

Services tend to be people intensive .... doubling services tends to
require doubling people ... not a lot of leverage. In a competitive
market, there is a tendency to compete on price and commoditize the
product; in services, people effectively become the product and what
gets commoditized.

However, going on in parallel in the period after the failure of FS
and the dark shadow that cast over the company for decades (and the
significant culture change) ... there is this item:

The MBA problem appears to be with people that view themselves purely
as MBAs (business schools stamping out newly minted MBAs). In the 80s,
the VC holy grail in silicon valley for startups was an engineering
graduate that had worked for some number of years and then went back
(possibly at night school) and got an MBA. It was critical to have the
engineering view but also beneficial to have additional viewpoints
(like an MBA ... or in the OS2 case, input from customers).

I had sponsored Boyd's briefings at IBM in the 80s ... and one of the
major points was constantly viewing all facets of an issue. misc.
Boyd-related posts & references
http://www.garlic.com/~lynn/subboyd.html

Note in the 1990 timeframe, the US auto industry had the C4 taskforce;
a large part was to leverage technology to deal with foreign
competition. A number of technology companies were participating. The
industry could clearly outline competitive advantages and necessary
changes to compete ... but obviously things were so ingrained that, at
the time, they weren't actually possible to change.

One of the details was that foreign competition had cut the cycle time
for a completely new model in half from 7-8 yrs and was in the process
of cutting it in half again. As a result, foreign competition was in a
much better position to take advantage of new technology and/or changes
in market preferences. Also, on a 7-8yr cycle, there were several cases
where parts specified in the original design were no longer available,
resulting in non-productive design scrap and rework.

Offline from the meetings, I would chide my mainframe brethren (who
were on a similar cycle): how could they expect to contribute (other
than trying to push their mainframe as part of any technology
solution)?

One of the things that the big consulting houses have demonstrated for
decades was reducing various things to best practices & formulas (a way
of leveraging a small experience & skill base) ... and then hiring
hordes of new college graduates each year and training them in the
formulas. Whole projects would be staffed by these formula trainees
... things would run ok as long as a project conformed to the formula
... but could go drastically wrong if things strayed from the formula.

note that my posts early in the discussion about FS failure noted that
declaring failure would result in loss-of-face by executives
... especially Opel ... but also Akers ... which kept it going long
after it should have otherwise been terminated (aka not just
organizational financial "loss aversion" ... but enormous executive
image loss). I've noted in the past that it probably wasn't career
enhancing to have ridiculed FS activities while it was going on
(drawing analogies between FS and a long-running cult film that had
been playing continuously down in central sq for several years).

in the beltway bandit scenario ... they would actually proclaim
something a failure ... and then beltway bandits take turns doing the
next, new&improved version, with the cycle repeating itself several
times (several failures, each followed by the next new effort). In
some cases, billions of dollars down the rathole on each iteration.

another scenario ... early in a 3yr, tens-of-millions/yr contract
... showing that the approach wouldn't work and being told that they
might consider a correct approach on the next contract ... but there
was significant money left on the table in the current contract (there
was little downside to performing exactly as specified even tho
everybody knew it would fail ... and then have the opportunity to try
again in the follow-on contract).

Peter Flass <Peter_Flass@Yahoo.com> writes:
Nope, but we certainly disagree about how to fix it. Try to reduce
costs. Eliminating enormous malpractice awards would save how much?
Think about both the enormous cost of malpractice insurance that's
driving doctors in some specialties right out of business. Then think
about all the unnecessary tests and prescriptions that are primarily
"defensive medicine."

there has been various observations that (any) congress is the least
likely to do something about malpractice cases since they are
predominately lawyers and heavily influenced by trial lawyer lobbying.

there are (at least) two sides to unnecessary tests & prescriptions
... the defensive medicine scenario ... but both medicaid & medicare
have worked hard to minimize unnecessary tests & prescriptions that
appear to be much more associated with over-billing (a significantly
larger financial motivation) than true defensive medicine (other than
anecdotal references and misdirection).

medicare & medicaid have gathered a large amount of national data and
worked hard to establish best practices, putting operations that
deviate significantly under close scrutiny. at the moment auditing is
extremely expensive and time-consuming and as a result, penalties are
especially onerous (acting as a deterrent). new guidelines include
processes that attempt to proactively preclude the worst of the
practices (being cheaper than catching and prosecuting after the fact).

however there is enormous lobbying to not cut revenue for various
interested parties (as epitomized by the one-line no-competitive-bidding
clause, which is around a $27T present to the drug companies, aka 2/3rds
of the $40T medicare part-d unfunded mandate)

the above mentions "coders", a new health-care profession ... being
able to specify codes that result in the maximum possible reimbursement
while keeping away from failing an audit (and enormous penalties). the
above also references a study that hospitals with the best IT have 30%
better patient care.

from above:
The most significant change in health policy since Medicare and
Medicaid's passage in 1965 went virtually unnoticed by the general
public. Nevertheless, the change was nothing short of revolutionary. For
the first time, the federal government gained the upper hand in its
financial relationship with the hospital industry. Medicare's new
prospective payment system with DRGs triggered a shift in the balance of
political and economic power between the providers of medical care
(hospitals and physicians) and those who paid for it - power that
providers had successfully accumulated for more than half a century.

... snip ...

--
virtualization experience starting Jan1968, online at home since Mar1970

Happy 100th Birthday, IBM!

Anne & Lynn Wheeler <lynn@garlic.com> writes:
while the period after the fiscal responsibility act expiring in 2002
made no effort to demonstrate deficit neutral (congress having gone
from Dr. Jekyll "budget surplus" to Mr. Hyde "extreme deficit" in a
few short years).

the cambridge science center on the 4th flr of 545 tech sq did the
(virtual machine) cp67/cms. The science center also took the
interpreter from apl\360 and did cms\apl. It opened up workspace size
to the virtual address space, instead of the 16kbyte to 32kbyte typical
of apl\360 workspaces. Interfaces to cms system services were also
added to cms\apl; the combination of significantly larger workspace
sizes and a system services API enabled real-world applications. APL
storage management also had to be extensively reworked for the large
virtual memory, demand-page environment. For instance, business
planners in armonk hdqtrs loaded the most valuable of corporate
resources on the cambridge cp67 system (detailed customer information)
for all sorts of business modeling and forecasting (done remotely from
armonk via 2741 dialup service). Cambridge took a lot of heat about the
system services api in cms\apl ... until the eventual "shared variable"
paradigm was created and used for interfacing to system services. some
old pictures ... including 2741 "APL" type ball
http://www.garlic.com/~lynn/lhwemail.html#oldpicts

later the palo alto science center morphed cms\apl into apl\cms for
vm370/cms (as well as the 370/145 apl microcode assist).

The period also saw significant reduction in regulation and/or failure
to enforce regulations, which allowed isolated hot-spots of fraud and
corruption to combine into an economic firestorm ... nearly taking down
the economy and country.

--
virtualization experience starting Jan1968, online at home since Mar1970

and found (in above):
"An inevitable consequence of the last Congress's decision to ramp up
spending so quickly was that billions of Americans' hard-earned tax
dollars were squandered. The Government Accountability Office (GAO) --
the non-partisan agency that audits the government's books -- recently
found between $100 billion to $200 billion in duplication, overlap, and
waste in federal spending."

Anne & Lynn Wheeler <lynn@garlic.com> writes:
Early spring 2009, I was contacted about HTML'ing the Pecora hearings
(senate hearings leading up to Glass-Steagall, had been scanned at the
Boston Public Library the previous fall) with extensive indexing,
cross-references and URLs between what happened then and what happened
this time (some anticipation that the new congress might have some
appetite to do something). After working on it for a couple months,
got a call saying it wouldn't be needed after all (implication that
new congress was being heavily lobbied by financial community).

--
virtualization experience starting Jan1968, online at home since Mar1970

At least two decades back, some gurus predicted that mainframes would disappear in future and it still has not happened

From: lynn@garlic.com (Lynn Wheeler)
Date: 12 Jul, 2011
Subject: At least two decades back, some gurus predicted that mainframes would disappear in future and it still has not happened
Blog: Mainframe Experts

Happy 100th Birthday, IBM!

Anne & Lynn Wheeler <lynn@garlic.com> writes:
the other part from the GAO comptroller general was that such budget
activities became especially egregious in the period after the fiscal responsibility act expired in 2002. This became the turning/inflection
point from balanced budget & surpluses in the late 90s to the enormous
deficits that continue up to current period.

from the article:
The industry paid lobbyists $1.3 billion in 2009 and through the first
three months of 2010, according to the Center for Public Integrity,
which added up the spending by the 850 businesses and trade groups
fighting financial reform. Many of these same businesses are now
spending as much money, if not more, to lobby for curbs on the new
law.

... middle of last decade, I was co-author of the financial industry
privacy standard (x9.99). As part of the effort, there were interviews
with people involved in HIPAA (overlap between the privacy provisions
and health information that could leak in things like credit card
statements, e.g. a line item for specific tests). One of the comments
was that the legislation
had been originally drafted in the '70s, but heavy lobbying kept it from
being passed for decades ... and even once it was passed, things like
providing security for health information was delayed for years ... and
then any enforcement/penalties regarding security measures was delayed
even further.
https://en.wikipedia.org/wiki/Health_Insurance_Portability_and_Accountability_Act

like:
Subtitle D of the Health Information Technology for Economic and
Clinical Health Act (HITECH Act), enacted as part of the American
Recovery and Reinvestment Act of 2009, addresses the privacy and
security concerns associated with the electronic transmission of
health information.

End of last century, we were tangentially involved in the cal. data
breach notification act. we had been brought in to help wordsmith the
cal. electronic signature act and lots of the participants were
heavily involved in privacy issues. they had done detailed privacy
surveys and found the #1 issue was "identity theft" ... in large part
in the form of "account fraud" ... frequently as the result of some
data breach. There seemed to be little or nothing done about data
breaches (in large part because the fraud is against the customers and
not those having the breach) and there appeared to be some hope that
the notifications would provide some motivation to do something.

Since then, there have been several federal (pre-emption)
"notification" bills introduced ... some similar to the
cal. legislation and an approx. equal number that would eliminate
notification.

The same entities were also in the process of doing an "opt-in"
privacy sharing legislation (i.e. institutions can only share personal
information when specifically authorized). Then there was federal
pre-emption privacy sharing "opt-out" provisions added to GLBA (also
known for repeal of Glass-Steagall, playing significant role in the
economic mess, contributing to elimination of barriers between the
individual hot-beds of fraud and corruption ... leading to economic
firestorm).
https://en.wikipedia.org/wiki/Gramm%E2%80%93Leach%E2%80%93Bliley_Act

Note: "opt-out" allows institutions to share personal information
unless you specifically object. Now, while working on x9.99, I
attended an annual privacy conference in Wash. DC. There was a panel
discussion with the FTC commissioners. During the discussion, somebody
in the audience got up and asked if they were ever going to do
anything about "opt-out". He said he was involved with the major
financial industry call-centers, and claimed that the "opt-out"
call-in lines had no way of recording information from the call (no
record of any "opt-out"). Just another example that during the
economic mess, not only was there a lot of de-regulation that helped
fuel the mess ... but there was also a significant amount of simply
failing to enforce what regulations remained.

at the end there was discussion of the theory that complex centralized
planning wouldn't allow their societies to collapse ...
Complex societies are characterized by centralized decision-making,
high information flow, great coordination of parts, formal channels of
command, and pooling of resources. Much of this structure seems to
have the capability, if not the designed purpose, of countering
fluctuations and deficiencies in productivity.

... snip ...

and why they might. The more successful they were in the past, the
more resistant they could be to change:
In reasoning by false analogy after World War I, French generals made
a common mistake: generals often plan for a coming war as if it will
be like the previous war, especially if that previous war was one in
which their side was victorious.

... snip ...

Another line was that leaders with investment in the status quo are
least likely to change & adapt because they have the most to lose:
Throughout recorded history, actions or inactions by self-absorbed
kings, chiefs, and politicians have been a regular cause of societal
collapses, including those of the Maya kings, Greenland Norse chiefs,
and modern Rwandan politicians discussed in this book.

... snip ...

--
virtualization experience starting Jan1968, online at home since Mar1970

Robert Morris, man who helped develop Unix, dies at 78

bbreynolds <bbreynolds@aol.com> writes:
My job was to get RJE working from the Unix environment to our
370/148 running whatever level OS then current (VS2?). I could not
get it work following the AT&T documentation, until I realized that
they (AT&T) had misunderstood the definitions of "primary" and
"secondary" in the IBM binary synchronous protocol.

Most likely VS1 ... in the transition from real-storage (360) to virtual
memory (370), os/360 MFT-II morphed into VS1 and os/360 MVT morphed into
VS2.

The 370/148 (&138, aka virgil/tully) had a lot more memory and was
faster than the original 370/145 ... as well as quite a bit of m'code
space ... so the machines had certain pieces of VS1 dropped into
microcode at a 10:1 performance increase, and certain pieces of vm/370
also dropped into microcode.

then the product manager con'ed me into running around the world helping
him do presentations to various country marketing executives (business
planners and forecasters).

About the same time, some guys in POK con'ed me into doing a 5-way SMP
project based on 370/125-II ... I designed a lot of new microcode and
kernel operation for them.

Then the two groups (Endicott 148 and POK 125-II 5-way SMP) started
viewing each other as competitors and I was expected to attend
escalation meetings and be responsible for arguing both sides.
misc. past posts mentioning 5way SMP activity:
http://www.garlic.com/~lynn/submain.html#bounce

--
virtualization experience starting Jan1968, online at home since Mar1970

The industry paid lobbyists $1.3 billion in 2009 and through the first
three months of 2010, according to the Center for Public Integrity,
which added up the spending by the 850 businesses and trade groups
fighting financial reform. Many of these same businesses are now
spending as much money, if not more, to lobby for curbs on the new
law.

there are periodic references that all the drama and conflict between
the political parties is obfuscation and misdirection for the public; a
kind of roman circus (pure facade). it also drives/motivates huge
lobbying lining congressional pockets (sort of like auction frenzy
driving higher and higher bids for congressional votes)

Another line was that leaders with investment in the status quo are
least likely to change & adapt because they have the most to lose:
Throughout recorded history, actions or inactions by self-absorbed
kings, chiefs, and politicians have been a regular cause of societal
collapses, including those of the Maya kings, Greenland Norse chiefs,
and modern Rwandan politicians discussed in this book.

... snip ...

aka ... in IBM's case, top corporate management.

In the 90/91 time-frame we would periodically drop by Somers and have
various discussions with people there about the necessity for change.
They could all discuss the issues intelligently ... but we would return
later and see no change ... it was almost as if they were holding off
the drastic changes for as long as possible, preserving the status quo
... possibly until after they had retired ... and then it would be
somebody else's problem.

The OODA-loop "exit"

one scenario is that the OODA-loop was to illustrate continuous
adjustment ... as opposed to just doing something and stopping; in that
sense, the OODA-loop can stop when the associated activities also stop.

enter/leave tends to be associated with spatial (enter/exit house),
while start/stop tends to be associated with temporal constructs. some
things have poor spatial analogy, like enter/exit "RED" or enter/exit
"flying"

individuals can be more comfortable with spatial metaphors, for
instance if there is a sequence of activity that occurs in time,
sometimes there is a spatial "path" metaphor where entering/exiting
the path is used (at what point in the sequence do things start, aka
sun entering/exiting active phase).

I've asserted that there is a flatlander aspect to the OODA-loop
... rather than a purely iterative sequence of activity ... with the
perimeter of a circle being used for the spatial path analogy ... where
the enter/leave metaphor can be applied ... all parts are occurring
continuously and concurrently, for instance orientation depending not
just on the previous observation but on all previous observations,
orientations, decisions, and actions. The OODA-loop then loses the
common spatial analogy ... also part of Zen thread
http://lnkd.in/ngnYM2 and
http://www.garlic.com/~lynn/2011h.html#39 Zen and Connaturality
http://www.garlic.com/~lynn/2011h.html#42 Zen and Connaturality

with respect to societies' collapse ... there is the issue of dealing
with a new environment assuming it is similar to a past situation
("false analogy" for the French & WW2). In adaptive feedback control
algorithms, it comes up with regard to how much past history to use and
how to "weight" past history ... and/or discontinuities that reset past
history. This has come up in hedging ... assuming that there aren't
discontinuities and the future is approx. linearly related to the
recent past. Discontinuities then result in massive failures; there was
concern that the late-90s hedge industry failure could be a systemic
risk with cascading failure of the financial industry, and there was an
extraordinary weekend session to patch things together.
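The "how much past history and how to weight it" question can be illustrated with an exponentially weighted moving average; a minimal sketch (the function, the alpha value, and the numbers are all illustrative, not from any actual hedging model):

```python
# Minimal sketch: an exponentially weighted moving average forgets old
# observations at rate (1 - alpha); a discontinuity (regime change)
# shows why purely history-based estimates lag badly until they re-adapt.
# All names and values are illustrative assumptions.

def ewma(samples, alpha=0.2):
    """Return the running exponentially weighted estimate of samples."""
    estimate = samples[0]
    history = [estimate]
    for x in samples[1:]:
        # the newest sample gets weight alpha; all prior history, (1 - alpha)
        estimate = alpha * x + (1 - alpha) * estimate
        history.append(estimate)
    return history

# stable regime: the estimate tracks the level exactly
stable = ewma([10.0] * 20)
assert abs(stable[-1] - 10.0) < 1e-9

# discontinuity: the level jumps to 50; the estimate lags far behind
shifted = ewma([10.0] * 10 + [50.0] * 3)
assert shifted[-1] < 40.0
```

A larger alpha adapts faster but is noisier; a smaller alpha smooths more but lags longer after a discontinuity, which is the trade-off between weighting recent history and weighting all of it.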

--
virtualization experience starting Jan1968, online at home since Mar1970

At least two decades back, some gurus predicted that mainframes would disappear in future and it still has not happened

From: lynn@garlic.com (Lynn Wheeler)
Date: 14 Jul, 2011
Subject: At least two decades back, some gurus predicted that mainframes would disappear in future and it still has not happened
Blog: Mainframe Experts

when it came down to the actual RFP, even after all the preliminary
work, various internal politics prevented bidding (even when the
director of NSF wrote a letter to the corporation, including the CEO
... with references that what we already had running was at least five
years ahead of all NSFNET backbone RFP responses to build something
new). The director of NSF warned that the letter might make the
internal politics worse than they already were (which it did).

Peter Flass <Peter_Flass@Yahoo.com> writes:
From what I gather the machine (SCAMP originally) was heavily
microcoded, I believe the original acronym is something like "Small
Computer with APL Microcode <something>". I don't know too much about
it.

in the mid-70s, the US HONE datacenters were consolidated in a building
across the back parking lot from the Palo Alto Science Center. other
trivia: looking at a web satellite photo of the area ... a newer
building was built next to the HONE datacenter (which has a different
occupant now) ... that newer bldg is now occupied by FACEBOOK.

--
virtualization experience starting Jan1968, online at home since Mar1970

Joint Design of Instruction Set and Language

John Levine <johnl@iecc.com> writes:
The Berkeley RISC and the SPARC were, in practice if not in theory,
designed around the PCC C compiler. The reason they used register
windows was that PCC wasn't smart enough to minimize register saves.
The IBM 801, designed at the same time with the much more advanced
PL.8 compiler had a normal register set and load/store multiple
since the compiler was able to minimize the number of registers
to store.

at a presentation in '76, the 801 group claimed that the hardware
simplification would be traded off against the sophistication of the cp.r
operating system and pl.8 compiler.

the lack of hardware domain protection would be compensated for by the
pl.8 compiler only generating correct code, and the cp.r operating system
only loading correct programs. the small number of virtual address segment
registers would be compensated for by inline application code being able
to switch virtual address segment registers ... as easily as general
purpose (address) register values can be switched. 801 would also not have
any cache consistency (since 370 & FS had paid a very high penalty for
cache consistency).

circa 1980, there was a big internal push to migrate a large number of
internal microprocessors to 801 (Iliad) ... including lots of
controller microprocessors and the microprocessors used in low-end and
mid-range 370 (aka the follow-on to the 4341, the 4381, was originally
to be an 801/iliad processor). misc. old email mentioning 801, risc, iliad, etc
http://www.garlic.com/~lynn/lhwemail.html#801

when most of those efforts failed, some number of engineers left to do
risc efforts at other companies.

the 801/ROMP was originally going to be the follow-on to the
displaywriter ... when that effort failed, they looked around and
decided on retargeting it for the unix workstation market. as part of that,
the company that had done the AT&T port for pc/ix was contracted to do
an AT&T port to romp (becoming aixv2). a hardware protection domain was
also needed in romp for the transition from cp.r to unix. misc. past posts
mentioning 801, risc, romp, rios, power, power/pc, etc
http://www.garlic.com/~lynn/subtopic.html#801

Low Carb Mavericks, John Boyd and the Art of War

Low Carb Mavericks, John Boyd and the Art of War
http://cravingsugar.net/low-carb-mavericks-john-boyd-art-war-OODA-loop.php

Boyd would tell the story about supercomputer time being used to
design the F16 ... while the powers-that-be were doing the F15. The
people behind the F15 went to the Sec of the Air Force and said they knew
what Boyd was up to .... and it was not authorized ... so the
supercomputer use was theft of gov. resources. They did a detailed
investigation and never found any proof of his computer use (which was
carefully obfuscated)

aka the purpose of the investigation was to shut down the F16 effort
(viewed as competition by the F15 forces) ... and it was somewhat
immaterial that it would send Boyd to Leavenworth for the rest of his
life ... some x-over with Boyd's "To Be or To Do", that the best you
might hope for is a kick in the stomach.

they were pushing token-ring as the solution for the enormous complexity
and weight problems of 3270 coax cable ... running point-to-point from the
datacenter to each 3270 terminal in the building. cat5 was lighter and could
run from terminal to a local wiring closet ... then with a single cat5
(aggregating multiple terminals) to the datacenter (or even a multiple
wiring closet hierarchy between terminal and datacenter) ... eliminating the
enormous cable runs from each terminal all the way back to the datacenter
(in some bldgs, there was danger of exceeding the bldg load limit just from
cable weight).

an aggregate 3-tier infrastructure using ethernet was not only cheaper
than the terminal emulation environment (with token-ring), but also
provided significantly more bandwidth (and services) to the client.

in the same time-frame, the new almaden research bldg had been
extensively wired for (16mbit token-ring) cat5 ... but found (10mbit)
enet provided both higher bandwidth and lower latency (than 16mbit T/R).

the dallas e/s center came out with a report showing 16mbit token-ring was
far superior to enet ... but I believed the only conceivable way they could
have gotten that result was by using the early/original 3mbit enet (before
listen-before-transmit).

--
virtualization experience starting Jan1968, online at home since Mar1970

the latency and aggregate bandwidth of 16mbit T/R (worse than 10mbit enet)
was because of the latency for the token to transit the ring. that was
separate from the technology of the 16mbit T/R cards themselves, with
individual cards having significantly worse throughput than 10mbit enet
cards.

The PC/RT workstation group had done their own 4mbit T/R card for the AT
bus. however, for the RS/6000 with microchannel, the group was told
they couldn't design their own cards and were mandated to use PS/2
microchannel adapter cards (helping their PS/2 brethren; however, for
lots of things that restricted RS/6000 to the thruput of the PS/2).

in large part because of the "terminal" emulation push, the design point
was to have 300 (or more) stations sharing the 16mbit bandwidth ... with
very little activity per station. the result was that the PS/2 16mbit T/R
microchannel adapter card had lower per-card thruput than the PC/RT
4mbit T/R AT-bus adapter card.
http://www.garlic.com/~lynn/subnetwork.html#emulation

Since PC/RT & RS/6000 tended very much to be client/server environments
... a PC/RT server with a 4mbit T/R adapter into the network could have much
higher thruput than an RS/6000 server with a 16mbit T/R adapter
(client/server environments tending to asymmetric bandwidth; i.e. server
requirements tending to the aggregate of all the clients).
http://www.garlic.com/~lynn/subtopic.html#801

All that is also separate from the extremely high markup on the 16mbit T/R
cards ... priced at over ten times that of readily available
high-performance 10mbit enet cards.

disclaimer: my wife is one of the inventors on a token-passing patent
from the late 70s.

--
virtualization experience starting Jan1968, online at home since Mar1970

Joint Design of Instruction Set and Language

anton@mips.complang.tuwien.ac.at (Anton Ertl) writes:
And IA-64, which was designed for extremely sophisticated compiler
technology, has the register stack, which is a more flexible version
of register windows. Register windows may not help much in SPEC CPU,
but programs that are actually used are compiled with separate
compilation, use dynamic linking and indirect calls (e.g., virtual
function calls), and even sophisticated compilers are quite limited in
their register allocation capabilities under these circumstances.

Announcement of the disk drive (1956)

Peter Flass <Peter_Flass@Yahoo.com> writes:
I'm just reading "IBM's 360 and Early 370 Systems" now, and got to the
chapter on disks. Apparently drums preceded disks, so disks probably
weren't seen as that much of an innovation, but only as a larger,
slower drum.

in the mid-60s, there were drums, disks and the 2321 "data-cell"
... collectively known as DASD (direct access storage devices) .. to
differentiate them from tape and other media.

in the 70s, i transferred to san jose research (bldg. 28 on san jose
plant site) ... and they would let me play disk engineer over in
bldgs. 14 & 15 (across the street). misc. past posts
http://www.garlic.com/~lynn/subtopic.html#disk

there are only sparse references to drums in any of the above (almost as
if they cleansed mention of drums from general storage references; it
seemed to start when they sold off the san jose plant site and the san
jose plant site storage web pages went away).

in the late 80s, a senior disk engineer got a talk scheduled at the
world-wide, annual, internal communication group conference ... and
opened the talk with the statement that the communication group was
going to be responsible for the demise of the disk division. the issue
was that the communication group had a stranglehold on the datacenter
... with corporate "ownership" of everything that crossed the datacenter
walls ... including attempting to preserve the terminal emulation
paradigm (and install base) while data was fleeing the datacenter to
more distributed computing platforms (the disk division had developed a
number of products to address the situation, but the communication group
had veto'ed them). recent related thread:
http://www.garlic.com/~lynn/2011i.html#58 Speed matters: how Ethernet went from 3Mbps to 100Gbps... and beyond
http://www.garlic.com/~lynn/2011i.html#60 Speed matters: how Ethernet went from 3Mbps to 100Gbps... and beyond

The Science center first did cp/40, having added virtual memory hardware
to a 360/40. cp/40 morphed into CP/67 when they were able to obtain a
360/67 that came standard with virtual memory hardware. Later cp/67
morphed into vm370 when virtual memory became standard on 370s. lots of
past posts mentioning the science center
http://www.garlic.com/~lynn/subtopic.html#545tech

TSS/360 was the "official" software for the 360/67 ... but it had numerous
difficulties. Running the same exact simulated application script for
fortran edit, compile and execute ... I got better throughput and
response for 35 simulated CP67/CMS users than the IBM SE got with 4
simulated TSS/360 users (running on the same identical 360/67 hardware).

In the 70s, the massive (failed) Future System effort (which was going to
completely replace 370) heavily used the single-level-store from
TSS/360.

... snip ...
S/38/AS400

The massive (failed) Future System effort in the early 70s was going to
completely replace 370 and drew heavily on single-level-store design
from TSS/360. The folklore is that when FS failed, several people
retreated to rochester and did a simplified, FS subset as
S/38. misc. past posts mentioning future system
http://www.garlic.com/~lynn/submain.html#futuresys

I had learned a lot at the univ. watching tss/360 testing and its
comparison with cp67/cms. Later at the science center in the 70s (during
the future system period), I continued to do 360/370 stuff ... including
a page-mapped filesystem for CMS (which never shipped in the standard
product) ... avoiding a lot of the tss/360 pitfalls (I would also
periodically ridicule the FS effort ... with comments that what I
already had running was better than their bluesky stuff).

... snip ...
CTSS, Multics, CP40, CP67, etc

note that some of the CTSS people (MIT IBM 7094) went to the 5th flr of
545 tech sq and did MULTICS; others went to the science center on 4th
flr of 545 tech sq and did (virtual machine) cp40, cp67, vm370, etc

... snip ...
360/67, tss/360, cp67, mts, orvyl/wylbur

There were quite a few customers sold the 360/67 with the promise of
running tss/360. when tss/360 looked like it was going to be difficult to
birth ... many switched to os/360 or cp67. Michigan did its own (virtual
memory) MTS system and Stanford did its own (virtual memory)
Orvyl/Wylbur system. Later the Wylbur part was ported to os/360

... snip ...
VMA, virtual machine microcode assist

cp40 & cp67 provided virtual machine support by running the virtual
machine in problem state, taking the privilege/supervisor state
interrupts for supervisor state instructions, and simulating them. Later
for vm/370 and 370, virtual machine microcode assist was provided on the
370/158 and 370/168, which would execute frequently used supervisor
state instructions according to virtual machine rules.
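the trap-and-emulate scheme above can be sketched roughly as follows (a
minimal illustration, not real CP67 code; the dispatch loop and state
layout are invented, though LPSW/SSM/SIO are actual 360 privileged
opcodes used here as examples):

```python
# Illustrative sketch: the guest runs in problem state, so privileged
# instructions trap to the hypervisor, which simulates them against the
# virtual machine's state; everything else runs natively.
PRIVILEGED = {"LPSW", "SSM", "SIO"}

def emulate(op, arg, vm):
    # stand-in for CP simulating the instruction per virtual machine rules
    vm["log"].append((op, arg))

def run(instructions, vm):
    for op, arg in instructions:
        if op in PRIVILEGED:
            emulate(op, arg, vm)   # trap: simulated by the hypervisor
        else:
            # non-privileged work executes directly at full speed
            vm["regs"][arg] = vm["regs"].get(arg, 0) + 1

vm = {"regs": {}, "log": []}
run([("AR", "r1"), ("SSM", 0x00), ("AR", "r1")], vm)
print(vm["log"])   # only the privileged SSM was intercepted
```

the microcode assists then moved the `emulate` step for the hottest
privileged instructions into hardware, avoiding the trap overhead.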

A superset of this was extended for the 370/138 & 370/148, called ECPS
... which included dropping parts of the vm370 supervisor into
microcode. There was an attempt to ship all 138/148 machines with VM370
pre-installed ... sort of an early software flavor of LPARS ... which was
overruled by corporate hdqtrs (at the time there were various parts of
the corporation working on killing vm370).

A much larger and more complete facility was done for 370/xa on 3081
called SIE.

Amdahl came out with a "hardware"-only "hypervisor" function ... sort of
a superset of SIE ... but a subset of virtual machine configuration.

IBM responded with a similar facility, PR/SM, on the 3090 ... which was
further expanded to multiple logical partitions as LPARS. PR/SM heavily
relied on the SIE microcode implementation ... and for a long time a
vm/370 operating system running in an LPAR couldn't use SIE ... because
it was already in use for the LPAR. It took additional development before
vm370 running in an LPAR (using SIE) could also use SIE for its own
virtual machines (aka effectively SIE running under SIE).

... snip ...
s/38 single-level-store

one of the shortcomings of the simplified s/38 single-level-store was
that it treated all disks as a common pool of storage with scatter
allocation across the pool. As a result, all disks had to be backed up as
a single integral filesystem and any single disk failure would require a
whole filesystem restore (folklore about the extended length of time to
do a complete restore after a single disk failure). single disk failures
were a fairly common failure mode and the s/38 approach scaled up poorly
to environments with 300 disks (or more) ... aka on any disk failure,
take down the whole system while the complete configuration was restored
(plus the length of time the system would be down for a complete backup).

this shortcoming was the motivation for s/38 to be an early adopter of
RAID technology ... as a means of masking single disk failures.
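the scatter-allocation problem can be illustrated with a toy simulation
(all numbers invented for illustration): with each file's extents spread
randomly over a common pool, losing one disk damages a large fraction of
all files, so per-disk backup/restore isn't possible.

```python
import random

random.seed(0)
DISKS, FILES, EXTENTS = 16, 200, 8

# each file's extents land on random disks in the common pool
placement = {f: {random.randrange(DISKS) for _ in range(EXTENTS)}
             for f in range(FILES)}

failed_disk = 0
damaged = sum(1 for extents in placement.values() if failed_disk in extents)
# expected fraction of files touching the failed disk: 1-(15/16)**8, ~40%
print(f"{damaged} of {FILES} files damaged by one disk failure")
```

with per-disk filesystems, the same failure would have damaged only the
files on that one disk.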

... snip ...
virtual paging under virtual paging

VM370 supported the memory of the virtual machine with demand-paged
virtual memory managed by an approximation to global LRU. MVS/370,
running in a virtual machine, managed its virtual pages (in what it
thot was "real memory") with its own LRU approximation.

LRU, or least recently used, assumes that a page that hasn't been
used for the longest time is the least likely page to be used in the
future. It can be paged out and the real storage allocated for some
other use.

It was possible for MVS/370 with its LRU paging to get into a pathological
situation when running under vm370 (with its LRU paging). VM370 will
select an MVS/370 virtual machine virtual page to be replaced (paged
out) because it hasn't been used for a long time (aka least recently
used). However, if MVS/370 is also paging (using LRU page
replacement), that same page is also the one that MVS/370 will decide
to use next (i.e. invalidating the assumptions behind vm370's
least-recently-used page replacement)
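the pathology can be shown with a minimal sketch (strict LRU at both
levels, a simplification of the approximations both systems actually
used): since host and guest observe the same reference recency, the frame
the host pages out is exactly the frame the guest will reuse next.

```python
from collections import OrderedDict

def lru_victim(recency):
    return next(iter(recency))  # oldest entry = least recently used

# recency order of the guest's 4 frames; frame 0 starts oldest
recency = OrderedDict((frame, None) for frame in range(4))

def touch(frame):
    recency.move_to_end(frame)  # most recently used moves to the end

for ref in [1, 2, 3, 0, 1, 2]:  # guest's reference stream
    touch(ref)

host_victim = lru_victim(recency)   # vm370 pages this frame out ...
guest_victim = lru_victim(recency)  # ... and mvs reuses this frame next
assert host_victim == guest_victim  # every guest replacement double-faults
```

so every guest page-replacement immediately faults on a frame the host
just paged out, which is why the two LRU levels needed to be decoupled.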

it spends quite a bit on how the invention of the alphabet
enabled logic and thinking about thinking (the words were abstractions
of what they were supposed to represent). it sort of says that before
it was available, people dealt with their environment based on what
they had (personally) experienced. afterwards people could start to
deal with their environment based on logic & abstraction (w/o
necessarily having personal experience).

--
virtualization experience starting Jan1968, online at home since Mar1970

Architecture / Instruction Set / Language co-design

mac <acolvin@efunct.com> writes:
Marketing it that way didn't make it so. I can't find my copy of Organick's
book, but I remember at the time that this seemed bogus.

The 432 had hardware object protection of the kind that might be useful for
machine language and C, but should have been unnecessary for Ada's strong
typing.

It's true that the 432 had support for garbage collection, but it wasn't
clear that Ada required this.

The 432 also promoted the idea of "the Silicon Operating System". Since
OS's are expensive and unreliable, we'd be better off if they were
implemented in hardware. I heard that elements of the design wound up in
the 286 et seq.

the 432 group gave a presentation at acm sigops (asilomar, 79? or 81?). one
of the big problems was that there was a large amount of complex code
directly in silicon ... which had a tendency to have bugs ... and was very
expensive to correct.

Wasn't instant messaging on IBM's VM/CMS in the early 1980s

CP/67 provided (instant) messages between users on the same machine.
The Pisa Science Center added the SPM command to CP67, which allowed a
virtual machine to "intercept" things like cp messages and other stuff
under software control. In the early 70s this was migrated to vm370
and was used internally by RSCS to support both "commands" sent as
messages as well as forwarding messages to users at other nodes on the
internal network (providing "instant" messaging with users on other
nodes in the network as well as on the same machine). The RSCS that
shipped to customers in '76 included SPM support. In the late 70s, the
author of REXX implemented a multiuser client/server spacewar game using
the facility.

I was blamed for online computer conferencing on the internal network
in the late 70s and early 80s (the internal network was larger than the
arpanet/internet from just about the beginning until possibly late '85
or early '86). Somewhat as a result, there was a researcher paid to
sit in the back of my office for 9 months taking notes on how I
communicate. They also got logs of all my instant messages and copies
of all my incoming and outgoing email. This became a research report, a
stanford phd thesis (joint between computer ai and language) and some
number of papers and books. references to computer mediated conversation

The person who testified in the Madoff hearings about trying
unsuccessfully for a decade to get the SEC to do something about Madoff
replied, to a question about the need for new regulations, that while new
regulation may be necessary, much more important was transparency and
visibility (which is pretty much the antithesis of the culture around
wallstreet)

In the wake of Enron, SOX was passed, requiring very expensive audits
for public companies, and also required SEC to do something. Possibly
because GAO (also) didn't think SEC was doing anything, it started
doing reports of an uptick in public company fraudulent financial filings
(even after SOX). SOX also required SEC to do something about the rating
agencies (which played a pivotal role in the financial mess). In the
rating agency hearings, there was comment that the rating agencies
could possibly avoid federal prosecution with the blackmail threat of a
credit down-rating.

and with respect to enron, sox, sec, gao, etc ... quote seen on the
web: "Enron was a dry run and it worked so well it has become
institutionalized"

There have been quite a few federal bills introduced in the more than a
decade since the cal. legislation, about evenly divided between those
approx. the same as the cal. legislation and those that would eliminate
notification.

note that about the same time as the cal. notification legislation,
cal. was also working on an "opt-in" privacy sharing bill (information
only shared when explicitly authorized), when a (federal pre-emption)
"opt-out" sharing provision was added to GLBA (other provisions
contributed to the financial bubble). In the middle of the last decade,
there was an annual privacy conference in WashDC that included a panel
discussion with FTC commissioners. From the audience, a person said that
they were associated with call center operations and claimed major
financial centers didn't bother to record/keep any information from 1-800
"opt-out" calls (no record of people declining personal information
sharing), and wondered if the FTC would ever investigate

disclaimer: I was co-author of the financial industry x9.99
privacy standard.

charlesm@MCN.ORG (Charles Mills) writes:
Somewhat OT but why? Why not C on the mainframe? Why two code bases, one
fairly easy to debug and one relatively hard to debug?

I am thrilled with writing software for the mainframe in C (C++ actually)
after years of laboring in assembler.

the los gatos vlsi lab was using metaware for a lot of (mainframe) vlsi
tool development. two people from the group then did a mainframe pascal
compiler ... which eventually evolved into the vs/pascal product.

I was working on getting one of the people (responsible for mainframe
pascal) to do a C language front-end ... when he left and went to work for
metaware. when the palo alto group was planning on doing BSD unix for the
mainframe, I talked them into contracting with metaware for the C
compiler. However, before that mainframe BSD unix shipped, the group was
retargeted to the PC/RT ... eventually coming out with "AOS" (bsd unix
running on the pc/rt) ... but still using metaware's c compiler.

the disk division eventually sponsored the posix support on MVS ... one
of the many things they were doing to try and get around the
stranglehold that the communication group had on the mainframe
datacenter (most of which the communication group vetoed ... since the
communication group had strategic ownership of everything that crossed
the datacenter walls; the disk division being hdqtrd in silicon valley
possibly helped with their perspective)

misc. past posts mentioning the disk division talk at the annual,
internal, world-wide communication group conference that started out with
the statement that the communication group was going to be responsible
for the demise of the disk division (the communication group stranglehold
was already resulting in data fleeing the mainframe datacenter to more
distributed computing friendly platforms).
http://www.garlic.com/~lynn/subnetwork.html#terminal

... left and did a lot of consulting for various silicon valley chip
shops. At one place, he did a lot of work and enhancements on the AT&T
C compiler (and some number of other vendor C compilers) for their
operations on the mainframe (as part of porting BSD vlsi tools to the
mainframe). At one point he was doing a lot of mainframe ethernet support
work as part of supporting SGI graphics workstations for displaying VLSI
designs. The salesman dropped in and asked him what was going on; after
being told, the salesman suggested that he should be doing token-ring
support instead (or otherwise the customer might find mainframe support
and maintenance suffering). Afterwards, I got a phone call and had to
listen to several hours of comments about the company, the local branch
office, and salesmen. The next morning, the vlsi company had a big press
release that they were moving off the mainframe to unix servers.

--
virtualization experience starting Jan1968, online at home since Mar1970

john_w_gilmore@MSN.COM (john gilmore) writes:
This morning's New York Times, which perhaps not quite all of you see,
contains a piece attributing IBM's unexpectedly good financial results
to sales of new mainframes.

from above:
Software sales advanced 17 percent, evidence that Chief Executive
Officer Sam Palmisano is making headway on efforts to bulk up in that
area, in addition to services, IBM's mainstay. Together, the divisions
accounted for 80 percent of IBM's sales in the quarter, up from 65
percent a decade earlier.

... snip ...

--
virtualization experience starting Jan1968, online at home since Mar1970

the 1st cp67 installation outside the science center was at lincoln labs
(the univ. I was at was the 2nd outside the science center). then fairly
quickly, there were two cp67 commercial timesharing service bureau
startups, one by the head of lincoln labs (with some others from lincoln
labs). recent cp67 reference
http://www.garlic.com/~lynn/2011i.html#54 Before the PC: IBM invents virtualisation (Cambridge skunkworks)
http://www.garlic.com/~lynn/2011i.html#63 Before the PC: IBM invents virtualisation (Cambridge skunkworks)

and a whole lot of 64bit sparcs ... regardless of what sun/oracle is
doing.

recent comment (in linkedin) thread that possibly single blade
mega-datacenters may have more MIPs than the aggregate of all
currently installed mainframes:
http://www.garlic.com/~lynn/2011i.html#9 At least two decades back, some gurus predicted that mainframes would disappear in future and it still has not happened

note that the original 64bit sparc was being done at HAL (initials of the
former head of the ibm 801/risc workstation division and head of sun
manufacturing; there was a glitch at the last minute with sun objecting
to participation by the former sun employee) ... heavily funded by fujitsu
... eventually absorbed into fujitsu.
https://en.wikipedia.org/wiki/HAL_Computer_Systems

DG Fountainhead vs IBM Future Systems

Quadibloc <jsavard@ecn.ab.ca> writes:
Yes; if one is using MIPS as a measure of power rather than just
number of instructions per second, I think that the usual standard for
"1 MIPS" is a KDF 9. Although I've seen the VAX proposed for that role
as well.

jmfbahciv <See.above@aol.com> writes:
In SMP, the controllers had to have a connection to each CPU. That way one
didn't lose the devices when a CPU was removed. this also happened with
disk controllers and disk drives. One port of the disk drive would be
hooked into one controller on one CPU and the other port would be hooked
to another controller in another CPU. If we had 3 or 4 port drives, they
would have been hooked into the third and fourth CPUs.

a standard 360 (360/65) SMP (2-way "tightly-coupled") had both processors
sharing the same/common memory ... but each processor had dedicated
channels (for I/O). SMP I/O configurations were simulated by having
"twin-tailed" controllers (two channel interfaces) that had connections
to channels on the different processors.

then there was the 360/67 ... the uniprocessor version was nearly
identical to a 360/65 but with the addition of virtual memory support.
however, the SMP version had quite a bit of additional engineering: up to
four processors sharing memory (although I think only 3-ways were
actually built) and "shared" channel support (any processor could access
any channel). 360 shared channels weren't seen again until more than a
decade later with the 3081. recent mention of 360/67
http://www.garlic.com/~lynn/2011i.html#54 Before the PC: IBM invents virtualisation (Cambridge skunkworks)
http://www.garlic.com/~lynn/2011i.html#63 Before the PC: IBM invents virtualisation (Cambridge skunkworks)

there were also "loosely-coupled" configurations where the processors
didn't have any shared memory ... but had twin-tailed controllers
connected to different channels on different processors.

in the 70s, the 3830 disk controller (for 3330, 3340, 3350 disks)
supported four-tailed operation (the controller could be connected to
four different channels on four different processors). There was also the
"string-switch" ... sort of a sub-controller ... where a string of 8 3330
drives could be connected to two different 3830 controllers (where each
controller might have four channel connections, allowing up to eight
channel paths to a single string of disks).
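the path fan-out above can be enumerated in a toy sketch (device names
invented for illustration): a string-switched string of drives reaches
two controllers, each with up to four channel connections.

```python
# two 3830 controllers, each four-tailed (four channel connections)
controllers = ["3830-A", "3830-B"]
channels = {c: [f"{c}/chan{i}" for i in range(4)] for c in controllers}

# every controller/channel pair is an independent path to the disk string
paths = [ch for c in controllers for ch in channels[c]]
print(len(paths))  # 8 channel paths to the one string of disks
```

any single channel or controller failure still leaves paths to the data,
which is the point of the twin-tailed/string-switch configurations.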

ACP/TPF leveraged loosely-coupled operation both for "fault isolation"
(various kinds of storage overlays could corrupt SMP operation) and for
processing growth. For the 3830, there was also a special ACP "locking
facility" ... that supported logical locks providing much finer
granularity than the traditional device reserve/release. referenced
in this old email (also mentioned in the TPF wiki page):
http://www.garlic.com/~lynn/2008i.html#email800325
in this post
http://www.garlic.com/~lynn/2008i.html#39 American Airlines

disclaimer ... long ago and far away, my wife did a stint in POK (ibm
large mainframe hdqtrs) in charge of loosely-coupled architecture.

One of the issues for ACP/TPF was that the 3081 wasn't going to have a
non-SMP version and ACP/TPF didn't have tightly-coupled support (only
loosely-coupled support). There were all sorts of antics done to try and
satisfy the ACP/TPF customers (keep them from all going to Amdahl
... which was shipping non-SMP products). One of the interim things was a
hack to vm/370 specifically for improving the thruput of TPF running in a
single virtual machine on the 3081 (but it significantly increased
virtual machine overhead for all customers running vm370 multiprocessor
configurations). Eventually there was a specially modified 3081 with one
of the processors removed ... sold as the 3083 (one of the problems was
that the 3081 was internally wired with processor0 at the top of the box;
simply removing processor1 in the middle of the box would have left the
box top-heavy with some danger of tipping). Later there was a special
3083 channel microcode load tailored for distinctly TPF I/O
characteristics.

The Unix revolution -- thank you, Uncle Sam?

jmfbahciv <See.above@aol.com> writes:
The new computer center was designed to have glass walls so that the
kiddies could see the gear and watch the operators working. After
the second or third bomb threat, the director was very sorry he
insisted on the glass walls. It was supposed to be part of student
education so they would know what a computer looked like.

in the early 80s, there was an incident with a former employee driving a
vehicle through the glass windows. after that incident there was a major
change in computer room design and placement. the "new" almaden research
bldg in the mid80s also had some number of landscaping features making
it difficult for a vehicle to enter the bldg. some number of those
features increased around gov. bldgs in the last decade.

from ibm jargon:
Drive-in Branch n. ISG HQ in Bethesda, Maryland. Named for an incident
in 1982 when a former IBM employee drove his car through the doors of
the building (which never was a branch office, in fact) and went on a
shooting spree that killed or injured a number of people. Many of the
fortifications around the entrances of IBM buildings date from this
incident. [This usage is unfortunately quite common, being used by
those unaware of the details of the incident. It is considered to be
in bad taste by those who lost friends and colleagues.] See also Rusty
Bucket.

... snip ...

--
virtualization experience starting Jan1968, online at home since Mar1970

In IBM, there used to be references to Watson & "wild duck"
employees. However, with Opel & Akers, there are references to the FS
failure (and associated loss of face by top executives) resulting in a
culture change to sycophancy and making no waves. The joke was that
"wild ducks" were still tolerated as long as they flew in
formation. Recently, as part of the 100th anniversary celebration,
they did a video about Watson "wild ducks" ... but it was about "wild
duck" customers (with no reference to "wild duck" employees).

I had gotten blamed for online computer conferencing on the internal
network (larger than the arpanet/internet from just about the
beginning until possibly late '85 or early '86) in the late 70s and
early 80s. The folklore was that when the executive committee
(chairman, ceo, pres, etc) was informed of online computer
conferencing (and the internal network), 5of6 wanted to fire me
(repeatedly over nearly my whole employment, I was told there would be no
career or promotions).

... and it reads like the Success Of Failure culture (make more money
from having several failures) ... which could imply purposefully
selecting for those that support the status quo (and don't innovate)

there is also another similar discussion about F35

in the MIC/MICC there can be hundreds of billions involved, and
upsetting the status quo can result in serious action (one explanation
is that a large percentage are totally amoral ... it's nothing personal,
purely business ... aka money).

Boyd's story about doing the F16 anticipated serious repercussions from
the F15 forces. At one point the F15 forces go to the secretary of the
air force and complain, pointing out that they know Boyd is doing the
F16, it is unauthorized, and he has to be using enormous amounts of
supercomputer time, also unauthorized. The unauthorized supercomputer
time amounts to tens of millions in theft of gov. property. They start an
investigation which could send Boyd to Leavenworth for life ... but
extensive audits of all gov. computers find no evidence of Boyd's
use. He would tell that after they gave up, the person heading the
investigation came to him ... saying he would like to know how it was
done ... just for his own personal information.

He would also tell about the 18 months leading up to the Spinney
article (18 pages, time magazine in the early 80s). They had to make
sure that they had a copy of written authorization for every piece of
information in the unclassified congressional briefing. The hearing
itself involved a lot of politics, with it eventually being moved to a
small, cramped hearing room on a Friday afternoon. Sat. morning,
supposedly, SECDEF is holding a damage control meeting and is relieved to
find only passing reference in the papers. Then the 18-page article hits
the news stands on Monday morning. An investigation kicks off trying to
convict Spinney of anything ... but there is the carefully constructed
paper trail. DOD supposedly creates a new classification, "NOSPIN"
(unclassified but not to be given to Spinney); also SECDEF supposedly
claims that they know Boyd is behind it, has him transferred to a remote
outpost in Alaska, and banned from entering the Pentagon for life
(politics again come into play and the order is rescinded)

note that in the Success of Failure series, an agency person that
talked to the reporter was charged with all sorts of serious
offenses. When the dust finally settled (last couple weeks), all he
was convicted of was a misdemeanor ... nothing to do with talking to
the reporter. There have been some number of articles about serious
gov (& corporate) intimidation of whistleblowers. old reference to
Success Of Failure series:
http://www.govexec.com/management/management-matters/2007/04/the-success-of-failure/24107/

there was also a paper from the UK in the wake of the financial bubble
burst about the mathematics of regulation and complex systems spinning
out of control when all controls have been removed (my alternate
analogy was that removing the controls allowed individual hotbeds of
greed & corruption to come together in a financial firestorm). I'm
also currently reading (on Kindle) Gleick's recent "The Information: A
History, A Theory, A Flood" ... it mentions positive feedback
resulting in systems running away (microphone/amplifier/speaker),
where negative feedback is necessary to keep industrial systems from
"runaway".
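the runaway/damping distinction can be sketched in a few lines ... a
minimal toy simulation (the gain values and step count are my own
hypothetical choices, not from Gleick or Bingham): with positive
feedback a fraction of the output is added back to the input each
step, so any deviation grows without bound (the microphone/amplifier/
speaker squeal); with negative feedback the deviation is subtracted,
pulling the system back toward its setpoint.

```python
def run(gain, setpoint=0.0, steps=20, x=1.0):
    """Iterate a one-variable feedback loop.

    Each step, the deviation from the setpoint is fed back into
    the state, scaled by 'gain'. Positive gain amplifies the
    deviation (positive feedback); negative gain damps it
    (negative feedback).
    """
    for _ in range(steps):
        x = x + gain * (x - setpoint)
    return x

runaway = run(gain=0.5)    # positive feedback: deviation compounds
damped = run(gain=-0.5)    # negative feedback: deviation shrinks

print(abs(runaway))   # grows very large
print(abs(damped))    # decays toward the setpoint (0.0)
```

the same structure is why unregulated markets in the Bingham argument
behave like the microphone loop: with the damping term removed, every
deviation feeds on itself.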

from above:
Markets need regulation to stay stable. We have had thirty years of
financial deregulation. Now we are seeing chickens coming home to
roost. This is the key argument of Professor Nick Bingham, a
mathematician at Imperial College London, in an article published today
in Significance, the magazine of the Royal Statistical Society.

There is no such thing as laying off risk if no one is able to insure
it. Big new risks were taken in extending mortgages to far more people
than could handle them, in the search for new markets and new
profits. Attempts to insure these by securitisation -- aptly described
in this case as putting good and bad risks into a blender and selling
off the results to whoever would buy them -- gave us toxic debt, in vast
quantities.