we did some work with a large public utility 15 yrs ago ... they
claimed to have over 6000 RDBMS where possibly 90% of the information
was common.

part of the issue was that early in RDBMS ... there were some
implementation shortcuts ... which resulted in RDBMS efficiency for
things like ATM transactions ... but depended on a fairly
static/structured schema. The result was that it was fairly
people-intensive to do a schema ... and the effort increased as the
different types of data increased. As a result ... RDBMS tended to be
relatively mission specific ... and past a certain number of different
items ... it was easier to clone an RDBMS and only have the items
needed for a specific set of tasks ... leading to large proliferation
of different mission-specific RDBMS in large organizations. These
frequently could have large amounts of common data ... like customer
name & address in possibly hundreds of different databases.

... and:
fast track - n. A career path for selected men and women who appear to
conform to the management ideal. The career path is designed to
enhance their abilities and loyalty, traditionally by rapid promotion
and by protecting them from the more disastrous errors that they might
commit.

... snip ...

it doesn't mention the disastrous results that it had on organizations
that were unfortunate enough to have an executive position being used
for fast track (having rapid transition by a large number of different
individuals that didn't understand that organization's business).
Arbitrary interchangeable executives complemented the "Mongolian
Hordes Technique" paradigm:
Mongolian Hordes Technique - n. A software development method whereby
large numbers of inexperienced programmers are thrown at a mammoth
software project (instead of deploying a small team of skilled
programmers). First recorded in 1965, but popular as ever in the
1990s.

... snip ...

--
virtualization experience starting Jan1968, online at home since Mar1970

for the fun of it ... related to the upthread reference to converting
the internal network to sna/vtam ... from ibm jargon:
notwork - n. VNET (q.v.), when failing to deliver. Heavily used in
1988, when VNET was converted from the old but trusty RSCS software to
the new strategic solution. To be fair, this did result in a sleeker,
faster VNET in the end, but at a considerable cost in material and in
human terms. nyetwork, slugnet

slugnet - n. VNET (q.v.) on a slow day. Some say on a fast day, and
especially in 1988. notwork, nyetwork

Anne & Lynn Wheeler <lynn@garlic.com> writes:
fast track - n. A career path for selected men and women who appear to
conform to the management ideal. The career path is designed to
enhance their abilities and loyalty, traditionally by rapid promotion
and by protecting them from the more disastrous errors that they might
commit.

fast track especially became epidemic in the latter half of the
80s. It somewhat complemented the CEO's projection that the revenue
was going to double from its $60B ... and there was a massive building
program going on to double manufacturing capacity (and I guess fast
track was attempting to double executives).

I've mentioned before that it was relatively trivial in the mid-80s to
show that the trend was towards commodity hardware (the opposite of
what was being predicted by the CEO; not a very career-enhancing
activity). We left during the red-ink period a few years later ... and
I've mentioned that in the executive exit interview, there was the
comment that they could have forgiven me for being wrong, but were
never going to forgive me for being right.

But by that time ... it would have been much more productive and
efficient to use all that additional networking hardware as part of
conversion to tcp/ip.

The mainframe tcp/ip product had been implemented in pascal/vs and had
a few "gotchas" ... however I had done the support for RFC1044 and in
some tuning tests at Cray Research, I could get full channel thruput
between a 4341 and a Cray, using only a modest amount of the 4341
processor (possibly a 500 times improvement/reduction in instructions
executed per byte moved). Misc. past posts mentioning having done the
rfc 1044 support for the mainframe tcp/ip product
http://www.garlic.com/~lynn/subnetwork.html#1044

TCP/IP is the technology basis for the modern internet, the NSFNET
backbone was the operational basis for the modern internet, and CIX
was the business basis for the modern internet. Misc. old email about
NSFNET backbone activity
http://www.garlic.com/~lynn/lhwemail.html#nsfnet

as well as doing a lot of work in the area of high-speed data
transport ... and working with the various entities that were the
likely candidates for the NSFNET backbone. This is old email about
being scheduled to give a presentation to the director of NSF on
NSFNET backbone ... but getting pre-empted for meeting on "processor
clusters" and having to find a substitute to give the backbone
presentation to director of NSF:
http://www.garlic.com/~lynn/2007d.html#email850315 in this old post
http://www.garlic.com/~lynn/2007d.html#47

Is email dead? What do you think?

from ibm jargon:
Tandem Memos - n. Something constructive but hard to control; a fresh
of breath air (sic). That's another Tandem Memos. A phrase to worry
middle management. It refers to the computer-based conference (widely
distributed in 1981) in which many technical personnel expressed
dissatisfaction with the tools available to them at that time, and
also constructively criticised the way products were
developed. The memos are required reading for anyone with a serious
interest in quality products.

... snip ...

as mentioned upthread, I had been blamed for computer conferencing on
the internal network in the late 70s & early 80s, including Tandem
Memos. The folklore was that when the executive committee was informed
of computer conferencing (and the internal network), 5 of 6 wanted to
fire me.

Tandem Memos was somewhat kicked off after a visit to Jim Gray at
Tandem. Jim had departed Research not long before ... palming off a
bunch of stuff on me ... including working with customers like BofA on
relational database (consulting with IMS group, etc). misc. past posts
mentioning original relational/sql implementation
http://www.garlic.com/~lynn/submain.html#systemr

In the wake of Tandem Memos the corporation instituted officially
sanctioned (and controlled) computer conferencing. from ibm jargon:
BYTE8406 - bite-eighty-four-oh-six v. To start a discussion about old
IBM machines. forum

BYTE8406 syndrome - n. The tendency for any social discussion among
computer people to drift towards exaggeration. Well, when I started
using computers they didn't even use electricity yet, much less
transistors. forum

n. The tendency for oppression to waste resources. Derives from the
observation that erasing a banned public file does not destroy the
information, but merely creates an uncountable number of private
copies. It was first diagnosed in September 1984, when the BYTE8406
forum was removed from the IBMPC Conference.

for the fun of it, from ibm jargon:
virtual Friday - n. The Wednesday or Thursday before a long weekend in
the USA for which the Thursday or Friday (respectively) is a
holiday. Usage: Don't hold that meeting tomorrow afternoon - it's a
virtual Friday.

... snip ...

and there is Aloha Friday ... which can also apply to leaving early on
(real or virtual) Friday.

I would sponsor (real & virtual) fridays after work at some watering
hole near the san jose plant site. When a new deli moved in across
from the main plant site ... for a period the back room had my name on
it and we got pitchers of Anchor Steam at half price.

re: friday; cc: friday; since tomorrow is a day off ... today is virtual
friday. Meet at the new deli across from the plant-site. ... this is a
real bummer, plant-site substation is being turned off for the three
days and also Los Gatos lab. is getting its power turned off
also. Anybody have a machine that will be up and running???? I've
discovered PCTERM running on an IBM/PC ... but it requires an extra
virtual machine (a "service" virtual machine which runs PCTERM and
interfaces to PVM).

Nice that the sun shines around here ... even if it does seem
infrequent. For those that attended the Boyd pitch on Monday, I have the
foil copies. We'll have hard copy to work discussions from. If we can
figure out what he was talking about, we might be able to get things
straight so that if we heard it again ... the subject would be
understandable.

It wasn't a rule that speakers have to speak in YKT before SJR ... it
was that somebody in YKT was telling xxxxxx that we couldn't have Boyd
speak in SJR until after he spoke in YKT. Sounded like a prestige power
play. We did an end run and had him speak in SJR. xxxxxx has the
guy's name.

A senior corporate executive had been the sponsor of the Kingston
supercomputing effort ... besides supposedly doing their own design,
there was also heavy funding for Steve's SSI. That executive retired
end of Oct91, which resulted in a review of a number of efforts,
including Kingston. After the Kingston review, there was an effort
launched looking around the company for something to be used as a
supercomputer and found cluster scaleup
https://en.wikipedia.org/wiki/IBM_Scalable_POWERparallel

Security flaws in software development

I had done some analysis of CVE reports (in part to support my merged
security taxonomy & glossary) ... back when they were still being
handled by Mitre (I had asked some of the Mitre people if the reports
could follow a little more structured reporting; at the time they said
they were lucky enough to get people to fill them out at all). Old
word & word combination frequency analysis
http://www.garlic.com/~lynn/2004e.html#43

not so much that the number of buffer overflows has dropped, but that
other exploits have increased (so buffer overflows are a smaller
percentage of a larger pie).

I've repeatedly mentioned that buffer overflows are related to
various C language programming characteristics. TCP/IP stacks
implemented in various other languages have had almost none of
the buffer overflows that appear in C language implementations
(aka it isn't impossible to have buffer overflows in other
languages ... however it is almost as hard to have buffer
overflows as it is to NOT have buffer overflows in
C). misc. past posts mentioning buffer overflows
http://www.garlic.com/~lynn/subintegrity.html#overflow
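
a minimal sketch (hypothetical, not from any particular stack) of the
C idiom involved ... a fixed-size buffer plus a length taken off the
wire that nothing forces the programmer to check:

  #include <string.h>

  #define MAXLINE 80

  /* hypothetical packet handler: "len" arrives off the wire; if
     len > MAXLINE, memcpy writes past the end of buf ... the classic
     C buffer overflow */
  void handle_unchecked(const char *data, size_t len)
  {
      char buf[MAXLINE];
      memcpy(buf, data, len);     /* no bounds check anywhere */
  }

  /* the same operation with the check that bounds-checked languages
     perform implicitly on every array store */
  int handle_checked(const char *data, size_t len)
  {
      char buf[MAXLINE];
      if (len > sizeof buf)
          return -1;              /* reject oversize input */
      memcpy(buf, data, len);
      return 0;
  }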

NSA Winds Down Secure Virtualization Platform Development; The
National Security Agency's High Assurance Platform integrates security
and virtualization technology into a framework that's been
commercialized and adopted elsewhere in government.
http://www.informationweek.com/news/government/security/showArticle.jhtml?articleID=229219339

--
virtualization experience starting Jan1968, online at home since Mar1970

The first personal computer (PC)

Mike Barnes <mikebarnes@bluebottle.com> writes:
IIUYC you're pointing out that it's possible to have non-standard key
assignments on your Windows PC. I'm guessing that that's just within one
or more applications, rather than generally. Either way it's hardly
surprising and I'm trying to work out whether I've missed something or
you've gone off at a tangent. My point was not that Ctrl+V was set in
immutable stone, but that it had more-or-less supplanted the alternative
key combination for Paste, which is/was Shift+Ins[ert]. My keyboard
doesn't even *have* an Insert key, though there is the Ins key on the
number pad.

for other topic drift, from ibm jargon:
board games - n. Exercises played by the designers of any new keyboard
(not just IBM's!) in order to retain an advantage over the end users.
The schemes employed can be so perverse that they defy belief at
times. n. Invisible decisions taken by members of some board or
committee, usually with all-too-visible results.

... snip ...

--
virtualization experience starting Jan1968, online at home since Mar1970

If IBM Hadn't Bet the Company

"Charlie Gibbs" <cgibbs@kltpzyxm.invalid> writes:
The convention here in Canada is to make a long weekend by taking
the following Monday off, rather than the preceding Friday. So
(with the exception of Easter) we don't have virtual Fridays -
but those virtual Mondays are a bitch.

a lot of the holidays in the US have also moved to mondays ... however
since I was responsible for calling "fridays after work" ... when Friday
wasn't a work day ... I would also call "virtual fridays".

--
virtualization experience starting Jan1968, online at home since Mar1970

from ibm jargon:
fast track - n. A career path for selected men and women who appear to
conform to the management ideal. The career path is designed to
enhance their abilities and loyalty, traditionally by rapid promotion
and by protecting them from the more disastrous errors that they might
commit.

... snip ...

it doesn't mention the disastrous results that it had on organizations
that were unfortunate enough to have an executive position being used
for fast track (having rapid transition by a large number of different
individuals that didn't understand that organization's
business). Arbitrary interchangeable executives complemented the
"Mongolian Hordes Technique" paradigm:
Mongolian Hordes Technique - n. A software development method whereby
large numbers of inexperienced programmers are thrown at a mammoth
software project (instead of deploying a small team of skilled
programmers). First recorded in 1965, but popular as ever in the
1990s.

... snip ...

fast track especially became epidemic in the latter half of the
80s. It somewhat complemented the CEO's projection that the revenue
was going to double from its $60B ... and there was a massive building
program going on to double manufacturing capacity (and I guess "fast
track" was attempting to double executives?).

However, in the mid-80s, it was clear that hardware was becoming
increasingly commoditized and the business was starting to move in the
opposite direction (in a few years the company was to go into the red).

Note that Boyd's To Be or To Do scenario is a little different from
the "Peter Principle" ... they actually choose to be that way. Both
Boyd's To Be or To Do and IBM's Fast Track have the sense that
blemishes are kept off their record. I'm currently reading Bing West's
recently published book "The Wrong War"
http://www.nytimes.com/2011/02/27/books/review/Filkins-t.html

and the point is periodically made about various (To Be) officers being
extremely "risk averse" since blemishes on the record can block
promotion.

... "Peter Principle" can imply that the incompetent
getting promoted, may not be actively involved in the promotion
decisions (almost passive with respect to the events);

Boyd's To Be or To Do has more of the sense that the incompetent have
been actively campaigning for the promotions. The corollary involves
individuals that spend the majority of their time & effort actively
involved in "career management" (their nominal responsibilities and
business taking a backseat).

Fast Track seems to actively encourage the latter (all of their
energies devoted to managing their careers).

recently posted old folklore about the first major, commercial, "true
blue" customer account to install a large "clone" mainframe processor
in the 70s ... and trying to avoid the "blemish" from showing up on
the branch manager's record.
http://www.garlic.com/~lynn/2011c.html#19 If IBM Hadn't Bet the Company

As mentioned in the folklore, it wasn't very career enhancing to
refuse to take the bullet for the branch manager. Also later, in the
mid-80s, it wasn't career enhancing to point out that the business
wasn't going to double, and in fact hardware commoditizing was
starting to drive the business in the opposite direction (leading to
the company going into the red).

from ibm jargon:
blue - n. The official IBM company colour, Oxford blue. There was once
a blue letter on Blue on the HONE system, which said that ...the
feature number 9063 (Blue for all System/370 CPUs and peripherals,
called classic blue) will have a slightly changed hue which can lead
to colour mismatch in customer machine rooms. Requests to repaint to
the old hue are not accepted. all-blue, Big Blue

true blue - adj. Of a customer account: using only IBM equipment. all-blue

--
virtualization experience starting Jan1968, online at home since Mar1970

Clone controllers and devices have been written up as motivation for
the Future System effort. During the FS period, they attempted to kill
off most of the 370 activity (which they apparently viewed as
competitive), which has been blamed for allowing clone processors to
gain a market foothold. During that period, I would somewhat ridicule
them (drawing comparisons with a continuously playing cult film down
at central sq) and continued to work on 370 stuff. With the demise of
FS, there was a mad rush to get stuff back into the 370 product
pipelines ... which contributed to the decision to release a bunch of
370 stuff I had been doing all during the FS period
http://www.garlic.com/~lynn/submain.html#futuresys
... including my "Resource Manager"
http://www.garlic.com/~lynn/subtopic.html#fairshare
which in turn, got me sucked into an attempt to obfuscate why a
customer was installing a clone processor
http://www.garlic.com/~lynn/2011c.html#19 If IBM Hadn't Bet the Company

Earlier, as an undergraduate in the 60s, one of the things I worked on
was a clone controller ... reverse engineering the channel interface
and building a channel interface board for a minicomputer programmed
to emulate a mainframe controller. Four of us were written up as
responsible for some part of the clone controller business.
http://www.garlic.com/~lynn/subtopic.html#360pcm

Also as an undergraduate ... I eliminated 2780 support from HASP (to
reduce real memory footprint) and replaced it with 2741 & TTY/ascii
terminal support along with a "context editor" for a form of CRJE
(which I thought was much better than the later MVT TSO).

Somewhat the motivation for the univ. clone controller effort was that
I was trying to do this automatic terminal identification ... and it
worked, being able to switch the line scanner for each port with the
2702 SAD command. Worked fine for directly connected lines ... but
there was a problem trying to use a single dial-up number (& hunt
group) for all terminals ... turns out the 2702 had taken a shortcut
and hardwired the line-speed for each port. One of the objectives for
the clone controller was being able to do both dynamic terminal type
and line-speed (for every port). The minicomputer vendor picked up the
implementation and was selling it commercially. That vendor was bought
and the implementation continued to be sold by the new owner. A decade
or so ago, I was in a large datacenter handling a major portion of the
US dial-up point-of-sale (card-swipe) terminals ... and a descendent
of our box was handling the incoming traffic (there was a claim that
it still used the original channel interface board design).
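
a minimal sketch of that identification loop (hypothetical ... the
names and the C rendering are illustrative, the real thing was channel
programs and controller logic): try each (scanner, line-speed) pair on
a port until the terminal answers recognizably:

  #include <stddef.h>

  enum term_type { TERM_2741, TERM_TTY, TERM_UNKNOWN };

  /* stand-ins for controller operations (assumed, not real APIs) */
  void set_line_speed(int port, int baud);          /* 2702 had this hardwired per port */
  void select_scanner(int port, enum term_type t);  /* analogous to the 2702 SAD command */
  int  probe_terminal(int port);                    /* send id sequence; nonzero if answered */

  static const struct { enum term_type type; int baud; } probes[] = {
      { TERM_2741, 134 },   /* 2741 ... ~134.5 baud */
      { TERM_TTY,  110 },   /* TTY 33/35 ... 110 baud ascii */
  };

  enum term_type identify(int port)
  {
      for (size_t i = 0; i < sizeof probes / sizeof probes[0]; i++) {
          set_line_speed(port, probes[i].baud);     /* dynamic line-speed per port */
          select_scanner(port, probes[i].type);     /* dynamic terminal type */
          if (probe_terminal(port))
              return probes[i].type;
      }
      return TERM_UNKNOWN;
  }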

from ibm jargon
coathanger - n. A computer terminal given to a reluctant old-timer IBM
manager who does not believe in data-processing. This is used as a
convenient object over which to throw the animal fur coat in order to
warm it for wearing home.

... snip ...

later they became management status symbols ... logged on in the
morning but never used ... with the PROFS menu screen being burned
into the tube (the secretary actually handling all the email). This
also led to situations where projects ordered some number of
PS2M80/300 (486 w/300mbyte disk) with 8514 "large screens" (760x1024)
for development projects ... but when delivered they would all be
hijacked for managers' desks (never actually being used except for the
burn-in of the PROFS menu on the 8514; the bigger the PS2, supposedly
a sign of the manager's status).

Earlier, 3270s were line items in the annual fall budget plan for
developers. Then there was a moment when a few of the top executives
started using email ... and the rapidly spreading news through the
executive ranks diverted nearly the whole annual 3270 allocation to
management desks (even if they never used them, it had to appear as if
they were part of the new online culture).

Jim set up to palm off a bunch of stuff on me when he left
... interface to customers on RDBMS, consulting with the IMS DBMS
group, etc.

In the Ferguson & Morris '93 book on IBM ... they mention that FS's
collapse had significant downside for IBM ... perhaps the most
damaging, the old culture under Watsons of free and vigorous debate
was replaced with sycophancy and make no waves under Opel and Akers.

--
virtualization experience starting Jan1968, online at home since Mar1970

I got "portable" 2741 at home, Mar1970 ... which was quickly replaced
with real 2741 ... which I had until 1977 ... when it was replaced
with (ascii) CDI miniterm ... which was then replaced with an (IBM)
(glass teletype) 3101 (code name topaz) ... and then personal
ibm/pc. I don't have pictures of the 2741 ... but this has pictures of
miniterm, 3101, & ibm/pc setup at home (along with my home tieline)
http://www.garlic.com/~lynn/lhwemail.html#oldpict

... I tried to ship a "real" networking product in the mid-80s
... initially based on Series/1 ... but being upgraded to RIOS (used
in rs/6000). At the mainframe boundaries, it would emulate 3725/NCP to
the VTAM host (telling VTAM that all resources were x-domain, but
actually "owned" by the networking infrastructure). Part of an old
presentation that I gave at an SNA ARB meeting in Raleigh:
http://www.garlic.com/~lynn/99.html#67

That kicked off some internal politics that could only be described as
"truth is stranger than fiction".

There is folklore that there used to be a department in Armonk which
was responsible for taking presentations and making them appropriate
for presenting in the bldg. ... primarily taking "foil" (also
"overhead") presentations and converting them to "flipchart"
presentations.

In a couple weeks, I'm giving a presentation at an IBM user group
meeting that was originally presented as foils at a mid-80s IBM SEAS
user group meeting.

One of the final "nails" in the FS coffin was study by the Houston
Science Center that a FS machine built from the fastest available
technology (at the time, i.e. 370/195), application would have the
throughput of 370/145 (about a factor of 30 times penalty). The S/38
market didn't have that many issues with such throughput slowdown
... as well as numerous other scaleup issue/problems. The failure of
FS has been frequently described as "too ambitious" ... which could be
a polite way of saying that it couldn't scale ... it was possibly the
polite description use for whole sections of FS that was "content
free" ... definitions that were akin to the Emperor's new
clothes parable.

Part of the FS countermeasure to clone controllers was to have
extremely tightly integrated operation. Besides appearing in S/38
... there have also been comments that the highly integrated VTAM/NCP
reflected the FS philosophy. The "problem" was that highly integrated
implied that all components had to be updated as part of a single,
synchronized process (and if there was a glitch in the synchronized
upgrade, the whole infrastructure had to be reverted). For a small
number of components this wouldn't be viewed as a major
problem. However, for a large operation with a significant number of
components (possibly spanning multiple datacenters), synchronized
upgrades (a form of another kind of scaleup problem) became a major
issue. Recent post in (linkedin) IBM Historic Computing discussing
past stories about various problems with highly integrated
synchronized upgrades
http://www.garlic.com/~lynn/2011c.html#60
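
a toy illustration (mine, not from any IBM code) of why lockstep
integration hurts at scale ... an exact-match version check forces
every component to change in one synchronized step, while a negotiated
version range lets old and new components interoperate during a
rolling upgrade:

  /* exact match: all components must be upgraded together, and a
     glitch anywhere means reverting everything */
  int lockstep_compatible(int my_ver, int peer_ver)
  {
      return my_ver == peer_ver;
  }

  /* negotiated range: each component advertises the versions it can
     interoperate with, so components can be upgraded independently */
  int ranged_compatible(int peer_ver, int min_ver, int max_ver)
  {
      return peer_ver >= min_ver && peer_ver <= max_ver;
  }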

--
virtualization experience starting Jan1968, online at home since Mar1970

The first personal computer (PC)

Evan Kirshenbaum <evan.kirshenbaum@gmail.com> writes:
You could do anything you wanted to at LOTS. What prevented it was a
GETOK jsys call. But if you LOADed the game, went into DDT, and
jfcled out the jump after that call, the system would let you play.

Not that I ever did anything like that...

the science center had given access to some number of students from
institutions of higher learning in the cambridge area (the science
center was on the 4th flr of 545 tech sq, its 360/67 cp67 machine room
was on the 2nd flr ... for other trivia, multics was on the 5th
flr). misc. past posts mentioning 545 tech sq
http://www.garlic.com/~lynn/subtopic.html#545tech

one day (early 70s), a student discovered something that would crash the
system (which immediately rebooted and was back up and running) ... and
then proceeded to do it a couple more times. the person was identified,
contacted and told to stop doing it (the problem was identified, a fix
was being generated and in the process of being applied). The person did
it a couple more times; they were contacted and told if they did it
again, their access would be revoked. They did it again, and their
access was revoked. They went to their advisor and complained that their
access had been revoked and nobody should have the right to stop them
from crashing the system.

the system would immediately reboot and be back on the air ... this
supposedly motivated a multics redesign, since multics was taking
possibly hrs to recover from a crash.

The crashes were because of a system modification in the TTY terminal
support code ... to support some sort of ascii device ... i remember
it as a plotter device down at harvard ... increasing the maximum line
length to 1200(?) chars. I had originally added the ascii/tty terminal
support to cp67 as an undergraduate in the 60s ... and played some
games with one-byte length calculations (mod 256; fine since maximum
line length was 80). Increasing the max. length to 1200 w/o fixing the
one-byte games ... was resulting in incorrect lengths, storage
overlays and the system crashes.
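
a minimal sketch (in C, for illustration ... the original was cp67
assembler) of the one-byte length failure mode ... a length carried in
a single byte is fine while the maximum line length is 80, and
silently wrong once the maximum is raised past 255:

  #include <stdio.h>

  int main(void)
  {
      unsigned char len;            /* one-byte length field */

      len = 80;                     /* old maximum: fits in a byte */
      printf("80   -> %d\n", len);  /* prints 80 */

      len = (unsigned char)1200;    /* new maximum: truncated mod 256 */
      printf("1200 -> %d\n", len);  /* prints 176, not 1200 */

      return 0;
  }

  /* any buffer arithmetic done with the truncated length then
     mismatches the actual data ... storage overlays and crashes */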

The above (multics) website story has an added note about tss/360 &
tss/370 ... with tss/370 being the true "FS" system architecture (aka
page-mapped and single-level-store).

past couple of months ... there was a report on the split of US fed tax
revenue between individual & corporate in the 50s and currently ... and
the percent of total tax revenue coming from corporations has dropped
by something like 2/3rds (dramatically shifting tax collections from
corporations to individuals).

this apparently goes along with reports that some large corporations are
paying near zero in taxes because of various tax code provisions ...
(tax code provisions accounting also for a recent report that the
wealthiest individuals' effective avg. fed tax is 16%).

i've periodically mentioned an economists' roundtable from a couple yrs
ago explaining the benefits of going to a fed flat-rate tax. the
scenario is that the current tax code is over 65,000 pages and the
enormous complexity of dealing with the tax code costs the country in
productivity. the claim is that going to a flat-rate tax reduces the
tax code to 400-500 pages and the simplification would result in almost
6% of GDP in increased productivity (currently wasted/lost by all the
direct & indirect resources going to dealing with special tax code
provisions). The increase in productivity more than offsets the loss of
any specific special provision's benefit (every specific special
provision may be viewed as providing some benefit ... but the toll of
allowing special provisions is akin to a death by a thousand cuts).

their other observation was that the enormous amounts of money involved
in special provisions ... contribute to the enormous amounts spent
lobbying ... and US congress having a reputation as the most corrupt
institution on earth. eliminating special provisions supposedly would
go a long way towards eliminating a major fraction of the enormous
corruption.

not if based on the trillions involved ... very few dictatorships can
match the magnitude that is played with by congress ... and the total
bits & pieces that is taken along the way.

there was a claim that the FIRE lobby (financial, insurance, real
estate) sees enormous ROI for every dollar spent on lobbying. There was
a claim that GLBA ... the 1999 bank modernization act ... cost the
financial industry approx. $250m (nearly evenly divided between the two
parties in congress). on the floor of congress, the rhetoric was that
the major purpose of the bill was to prevent walmart & m'soft from
becoming banks (walmart has been notorious for greatly improving
efficiency in any business it gets into; banks would have needed to
drastically cut their enormous margins to remain competitive with
walmart; part of a study at the time was that approx. 30% of the
country is "unbanked" ... because current financial institution margins
can't afford to provide them services). note however, GLBA also
repealed Glass-Steagall which played a significant role in the current
financial mess. During the last decade, supposedly something like 20
times as much was spent by FIRE as was spent on GLBA.

Now the GLBA rhetoric on the floor of congress was something like "if
you are already a bank, you get to remain a bank, but if you aren't
already a bank, you don't get to become a bank".

In the bail-outs, the FED was providing too-big-to-fail regulated
depository institutions trillions of dollars at near zero percent (far
exceeding TARP funds; after a long, hard-fought legal battle in the
courts to try and prevent releasing the information) ... easy to make
profits on the difference between the cost of money and what they did
with it. However, some of the too-big-to-fail were non-bank wallstreet
institutions ... which the fed eventually gave bank charters to
(presumably should have been precluded by GLBA) so they could also
have access to trillions.

the flat-rate tax code being 400-500 pages (instead of one or two)
effectively eliminated nearly all of the regressive issue ... with the
increased productivity more than offsetting all the other
downsides. the issue was that constant special provisions activity has
come to be a major part of the disease that is killing the patient.

eliminating all the ongoing special provision activity and churn in the
tax code has more upside benefit than all the possible downsides.

however, the economists' roundtable was somewhat pessimistic
... commenting that the participants are so thoroughly venal and
corrupt, and the magnitude of money involved so large ... that it
probably would only take a couple of years before they came up with
some other mechanism for their graft.

--
virtualization experience starting Jan1968, online at home since Mar1970

The first personal computer (PC)

Peter Flass <Peter_Flass@Yahoo.com> writes:
Taxing corporations _is_ taxing individuals. If they were taxed less,
or not taxed, they could either pay more dividends or their stock
price would go up, increasing the values of everyone's retirement and
any other investments they might have. Tax individuals or tax
corporations, but taxing both seems like double taxation.

note that taxing any legal entity can be construed as taxing some other
entity ... aka taxing (individual) farmers theoretically drives up food
costs which have to be paid for by some other entity.

there is a particular double taxation with respect to individuals ...
where the dividends are paid out of after-tax profits (aka already
taxed) and then individuals receiving dividends are taxed again (the
double taxation theoretically justifies a lower tax rate on dividends
which have already been taxed).

the whole thing has somewhat motivated europe's "value added tax"
... where tax is only on the incremental added value, not on underlying
components that have previously been taxed.

the counterexample is that not taxing legal entities calling themselves
corporations ... then results in things like individuals (say farmers)
re-organizing all their operations into a corporation and everything
they buy & spend being done as a corporation (their land, equipment,
homes, cars, buildings) ... and all "profits" being moved overseas to
tax shelters which are never repatriated to this country (and never
show up as something to be taxed in this country). oh wait, some of the
largest corporations are already paying zero tax by stuffing profits
into overseas tax havens (the treasury legal action against european
banks is only about overseas tax havens for individuals; corporations
are already allowed to move profits offshore to avoid taxation).

--
virtualization experience starting Jan1968, online at home since Mar1970

Patrick Scheible <kkt@zipcon.net> writes:
More likely, they could give a really nice raise to the CEO and a
pretty good raise to the rest of the executive suite. That seems to
be where all the corporate tax cuts since the Eisenhower
administration have gone.

there was legal action about corporations in the mid-90s lobbying
for corporate retirement funds to be treated as assets rather than
liabilities. the change resulted in really juicing some corporations'
bottom lines and various executives got hundreds of millions in
additional bonuses (bonus plans had been written to tie bonuses to
bottom-line improvement).

in the past decade ... there were a whole lot of things that the SEC
wasn't doing anything about. one was testimony in the congressional
madoff hearings by the person that had tried unsuccessfully for a
decade to get the SEC to do something about Madoff.

Then there is all the stuff with heavy leveraging that played a major
role in the financial disaster. Then there are the rating agencies
... that the SEC was supposed to do something about ... but didn't ...
who were selling triple-A ratings for toxic CDOs. This provided nearly
unlimited funds for unregulated loan originators ($27T during the
period). It also eliminated any motivation for unregulated loan
originators to care about borrowers' qualifications or loan quality
(they got their money immediately regardless ... the only limit was
the total aggregate loans they could write per day). The resulting
no-documentation, no-down, 1% interest-only payment mortgages were a
gold mine for real-estate speculators (potentially 2000% ROI in
regions with 20-30% inflation ... which was further fueled by
speculation) ... basically these loans became the equivalent of the
"Brokers' Loans" that fueled the '20s stock market frenzy (allowing
speculators to treat the real-estate market like the '20s stock
market).
Evil Wall Street Exports Boomed With 'Fools' Born to Buy Debt
http://www.bloomberg.com/apps/news?pid=newsarchive&refer=home&sid=a0jln3.CSS6c

On the backside of the $27T in triple-A rated toxic CDOs, wallstreet
collected enormous (transaction) fees and commissions. This seems to
account for reports that wall street bonuses spiked over 400% during
the period and the size of the financial industry (as a percent of
GDP) tripled during the period. Recently there were reports that
traders in various institutions were churning their triple-A rated
toxic CDO portfolios ... where they would buy triple-A rated toxic
CDOs from another institution's portfolio when that trader bought an
equivalent amount of triple-A rated toxic CDOs from them. The massive
amount of triple-A rated toxic CDOs might possibly bring down the
institution, but the enormous personal compensation appeared to
eliminate any concern about such events.

The role that the repeal of Glass-Steagall had to play was that
too-big-to-fail regulated depository institutions now had unregulated
investment bankers that could deal in triple-A rated toxic CDOs
and carry them off-balance (in addition to doing loads of portfolio
churning with other institutions). At the end of 2008, it was
estimated that the four largest too-big-to-fail (regulated banks) had
$5.2T in triple-A rated toxic CDOs being carried
"off-balance". There had been a few transactions for an aggregate of
several tens of billions that had gone for 22 cents on the dollar. If
those four banks were required to bring the $5.2T back on their books,
they would have been declared insolvent and forced to be
liquidated. One of the other details that the FED has been forced to
release mentioned that the FED has been buying up these off-balance
assets at 98 cents on the dollar (in addition to providing trillions
of dollars at near zero percent).

In the wake of ENRON, congress passed sarbanes-oxley which supposedly
significantly strengthened the auditing for public companies as well
as penalties for falsification. However, all this required the SEC to
do something. Possibly because GAO didn't think the SEC was doing
anything ... it started doing reports of public company financial
filings that it felt had significant problems (fraud &/or audit
mistakes) ... which showed an uptick even after
Sarbanes-Oxley. Semi-facetiously ... then Sarbanes-Oxley:

1) had no effect on public company fraudulent financial filings

2) encouraged the increase in public company fraudulent financial
filings

3) if it weren't for SOX, all public company financial filings
would be fraudulent.

The major purpose of the fraudulent filings was to boost executive
bonuses ... and even for filings that were later corrected, there
wasn't a corresponding correction in the bonuses.

above describes GF11 as a "one-off" special purpose machine ... while
RP3 was something more general purpose. In the 80s, FSD had gotten
into heavily funding the RP3 effort and at one point FSD asked my wife
to audit the project. After the audit, FSD terminated RP3 funding.

That may or may not have contributed to having cluster scaleup
transferred at the end of Jan92 and being told we couldn't work on
anything with more than four processors.

another reference that mentions both GF11 (1987) and RP3 (1985)
http://domino.research.ibm.com/comm/research.nsf/pages/r.arch.abs.html

oh ... with respect to the line in the google book page about doing
well in the mid-80s (before the red-ink of the 90s) ... see recent
posts in the (Greater IBM) "I actually miss working at IBM" thread
... that starts out with a reference to fast track and comments about
projections that the $60B revenue was going to double and there was a
massive building program to double manufacturing capacity (even tho
the trend in hardware commoditizing was already starting to indicate
the business was heading in the opposite direction). Also archived
here:
http://www.garlic.com/~lynn/2011d.html#12 I actually miss working at IBM

--
virtualization experience starting Jan1968, online at home since Mar1970

"Rod Speed" <rod.speed.aaa@gmail.com> writes:
The ratings agencys had a rationale for the AAA ratings, they
claimed that the default rate was very low, and it was, and that
the bundling that CDOs involved meant that what defaults there
were would just marginally reduce the earnings of a particular CDO.

the congressional rating agency hearings in the fall of 2008 had people
testifying (from the rating agencies) that the people (in the rating
agencies) selling the triple-A ratings "knew" they weren't worth the
triple-A ratings ... all the rest is pure smoke and mirrors to try and
cover up.

there was some amount of discussion in the hearings that the rating
agency business process became misaligned ... when the rating agencies
switched from buyers paying for the ratings to the sellers paying for
the ratings (creating the opportunity for conflict of interest).

in jan2009 there was a brief notice from the treasury about some
companies that they would be using to value toxic assets for
purchase. this was when they still believed that the TARP funds could
be used to buy the toxic assets (which is in the name of the
act). However, they fairly quickly found out that the appropriated
funds would hardly make a dent in the amount of triple-A rated toxic
CDOs (just the four largest institutions still having $5.2T being held
off-balance). After that there was no mention of buying up toxic
assets and all the press was about all the other kinds of things TARP
funds would be used for instead.

trivia ... one of the companies mentioned in jan2009 that was going to
be used for evaluating triple-A rated toxic CDOs had bought the
"pricing services division" from one of the three major rating
agencies ... when the rating agencies were switching from buyers
paying for the ratings to the sellers paying for the ratings (creating
a misaligned business process and the opening for conflict of interest
... as per the congressional hearings).

disclaimer: I knew many of the people in the company that bought the
pricing services division ... and at one time interviewed to work for
them.

in any case, a caustic view is that when they switched to sellers paying
for the ratings they no longer needed to accurately value the
instruments (or a pricing services division) that they were giving
ratings on.

2nd disclaimer: in the late 90s, we were asked to look at methodologies
for valuing securitized mortgages ... in part because they had been used
during the S&L crisis to obfuscate the underlying value (the problem was
widely known in the industry) ... this was before the sellers found out
that they could pay the rating agencies for a triple-A rating.

"Rod Speed" <rod.speed.aaa@gmail.com> writes:
It was even worse than that. Those NINJA loans actually paid a
higher commission because the interest rate paid was higher once
the initial sucker rate ran out, and so it was in the loan originator's
interest to write that sort of loan just for the higher commission.

Again, NO ONE, not Bernanke or anyone else saw that coming either.

article that somebody raised the issue in 2003 ... that selling off
loans eliminated any motivation for the loan originators to care about
loan quality or borrowers' qualifications ... and the street did their
best to hammer him.

above mentions DTCC resisting releasing records that might be used to
show illegal naked short activity (which cramer claimed to be
widespread).

disclaimer ... in the late 90s, I was asked in to NSCC (before they
merged with DTC to become DTCC) to look at improving the integrity of
trading transactions (some people from NSCC had been on a payments
standard working group where I helped author a standard significantly
improving the integrity of payment transactions). After doing some
amount of work, the activity was suspended with a comment that a
side-effect of the integrity work would have greatly increased the
transparency and visibility of trading transactions (which appears to
be antithetical to trading culture). Transparency and visibility were
also the number one thing identified by the person that tried for a
decade to get the SEC to do something about Madoff.

There was also a 2008 KPMG survey that found 60% of employees in the
banking and financial industry had personally observed misconduct that
"could cause a significant loss of public trust if discovered" (a rate
possibly twice that of other industries).

"Rod Speed" <rod.speed.aaa@gmail.com> writes:
To avoid another great depression or worse.

It took a long time to do anything as concrete as that in the 30s.

We are getting better at it, the unemployment rate only just made
it into double digits for a couple of quarters this time around.

Some countrys like Australia didnt even get a recession this time around.

in early 2009, I was asked to HTML'ize the Pecora hearings (they had
been scanned the fall before at the Boston Public Library and were
available online at the wayback machine; i.e. the 30s senate hearings
into the 20s speculation frenzy and crash) ... with heavy
cross-indexing and HREFs between what happened then and what happened
this time (there was apparently the belief that the new congress had
an appetite to do something about the problem).

After putting in quite a bit of work, I got a call saying it wasn't
needed. About that time there were numerous press items about the
financial industry having three lobbyists in $400 ($2000?) suits on
the hill for every member of congress.

Anne & Lynn Wheeler <lynn@garlic.com> writes:
article that somebody raised the issue in 2003 ... that selling off
loans eliminated any motivation for the loan originators to care about
loan quality or borrowers' qualifications ... and the street did their
best to hammer him.

one of the points in the above mentions that in 1989 Citi figured out
that its ARM mortgage portfolio could bring down the institution; it
unloaded the portfolio and got out of the business ... needing a
(private) bail-out to remain operating.

roll forward to the end of 2008 ... and Citi's "investment banking unit"
has over a trillion dollars in triple-A rated toxic CDOs being held
off-balance (the largest percentage of the $5.2T held by the four large
too-big-to-fail institutions) ... which underneath all the
securitization/CDO obfuscation is fundamentally an ARM mortgage
portfolio ... which could also take down the institution (even if the
triple-A rating wasn't open to question ... the 1989 "risk" wasn't a
loan quality issue ... it was purely an issue of adjustable rates) ...
and this time Citi (as well as others) needed a significant bailout to
remain operating.

an issue in this case was that the Citi institutional knowledge about
the fundamental risk in ARM mortgages had apparently evaporated by the
early part of this century (slightly over a decade) ... and/or
there was no transfer of such knowledge to the investment banking
unit (which exists courtesy of GLBA and the repeal of Glass-Steagall).

An item about the person that took over citi in the mid-90s and was the
major player behind getting Glass-Steagall repealed (having gotten
temporary waivers from Glass-Steagall from the FED as part of the
takeover)

from above:
Sanford Weill, who had built Citigroup into a global financial titan,
but whose final months as chief executive officer were overshadowed by
Spitzer's probe into the relationships between equity research
analysts and investment bankers during the internet boom years. Under
a 2002 settlement with Wall Street banks, Citigroup paid a $400
million fine, and Weill was forbidden to communicate directly with his
company's equity research analysts.

A few years ago I was asked to analyze an industry report that had
detailed bank operating numbers ... something like 60 items per page,
and a couple hundred pages. Each item gave the number for avg. for the
largest regional banks compared to the avg. for the largest national
banks. For whatever reason, the numbers for the regional banks showed
slightly better efficiency than the numbers for the national banks
... fundamentally invalidating the various business justifications for
the too-big-to-fail institutions. The primary remaining justification
was that the executives got bigger compensation and bonuses (which is on
par with the motivation that GAO found for the uptick in fraudulent
financial filings for public companies during the same period ... none
appeared to feel any threat from either SEC or SOX ... rare instances of
fines would be paid by the institution with little downside to the
executives).

One of the issues in the 90s was whether it was nodes or
processors. RIOS/Power had no cache consistency ... and all cluster
nodes were single processors. One of the things in somerset (AIM) was
single chip and cache consistency (supporting multiprocessor shared
memory). This mentions POWER3 configurations with up to 8 processors
(shared memory at each node), POWER3-based in 99:
https://en.wikipedia.org/wiki/IBM_Scalable_POWERparallel

the power3 article mentions it was originally to be called powerpc
630. in the comp.arch newsgroup there have been lots of statements that
630 was a significant effort by the rochester/as400 group (as opposed
to the AIM/somerset group).

this mentions SP2 with power3 multiprocessor nodes, three types of
nodes (thin, wide, and high) and short or tall frames. Frames can be
interconnected with up to 128 nodes (512 by special order ... aka ASCI
white?) using power3-II processors at 375mhz or 450mhz
http://archive.rootvg.net/column_risc.htm

one of the people referenced in the above email had dropped by after
the LLNL meeting (helping fill in for me at the meeting even tho he
wasn't w/IBM); he had coined the term "information utility", a sort of
precursor to the current cloud (long ago and far away he had been with
Cray & Thornton at CDC)

as implied in the LLNL/4341 email reference (for 70 processors), in
the late 70s, 4300s managed to catch the leading edge of customers
buying large collections of smaller processors that had breached some
price/performance threshold. other old 43xx email
http://www.garlic.com/~lynn/lhwemail.html#43xx

Morten Reistad <first@last.name> writes:
The analytical economists saw this, and they saw two other, secondary
indicators. They saw the bond security volumes shift rapidly upwards.
The ratios between AAA, AA, A, AB etc are pretty constant over time.
Shifts of several percent between them is newsworthy stuff. Here the
shift was around 60% from midrange to top (AAA- and up). There was no
conceivable reason for such a shift to exceed 3-4 %.

there was a point in 2008 when most of the rest of the community
realized that rating agencies were selling triple-A ratings on
instruments that didn't justify triple-A ... and (at least) the
municipal bond market "froze" ... because investors feared that they
might not be able to trust the ratings on anything. Warren Buffett
finally stepped in and started offering muni-bond "insurance" ... he
was reasonably sure that most muni-bonds were relatively safe even if
there was reason to no longer trust ratings.

for other topic drift ... this recent post references that sometime
later, FSD asked her to review the RP3 effort (which FSD was heavily
funding) and not long after they terminated their funding for RP3
http://www.garlic.com/~lynn/2011d.html#24 IBM Watson's Ancestors: A Look at Supercomputers of the Past
some more in the above thread
http://www.garlic.com/~lynn/2011d.html#29 IBM Watson's Ancestors: A Look at Supercomputers of the Past

--
virtualization experience starting Jan1968, online at home since Mar1970

IBM and the Computer Revolution

Michael Wojcik <mwojcik@newsguy.com> writes:
OS/2 1.1 introduced Presentation Manager (the GUI); OS/2 1.2
introduced HPFS. Numerous business applications ran on those versions,
making good use of protected mode, as well as database connectivity
and, for IBM shops, SNA networking. It was a much more robust
environment than Windows.

Apple sent shock waves thru the computer industry Thursday by filing
a lawsuit that could frustrate IBM plans to make its personal
computers as easy to use as Apple's.

- Filed in US District Court in San Jose
- raises some basic questions about who owns what when it comes to
look and feel of PC's
- Apple has argued that "user friendly" features of Mac are copyright
protected.
- IBM has staked the future of PS/2 line on Mac-like features
- Apple seeks to stop sale of Microsoft and H-P software that contain
features that distinguish Mac computers

- Apple is attacking an ally as well as a competitor (Microsoft)
- Microsoft sells the most popular software that runs on Mac
- Microsoft is also developing Presentation Manager to run on OS/2
- Presentation Manager is based on Microsoft Windows
- Earlier versions of Windows received a "limited license" from Apple
- Apple has not approved the latest release of Windows version 2.03
(Infringes Apple's copyrighted "audio visual display")
- Apple is saying between-the-lines:
"watch out IBM"
- Apple suing H-P over the New Wave program (works on IBM compatibles)

Local commentary:
- Russo, lawyer in software copyright:
. Look and feel of Mac is certainly legally protected
. Doesn't mean similar products will be found to violate copyright
. Apple has a copyright on display; question is to what degree
- Winer, software engineer with Symantec
. Relationship between IBM and Apple has always been strained
. Apple views Microsoft as a bigger competitor than IBM

--
virtualization experience starting Jan1968, online at home since Mar1970

I thought you might be interested in some things that have been
happening here the past few weeks. A project has begun to work on
AS/400-based development tools... that is, tools for development
projects and the tools will exploit the AS/400 database capability.
The project is also geared to exploit Andrew. The project is just
getting underway, so there is much that can happen, but thus far it
appears the project will have several related consequences:

a) TCPIP will be offered on AS/400
b) SUN's rpc will be offered on AS/400
c) TELNET and FTP are in plan
d) X client is being looked at
e) Andrew will become the workstation of choice in Rochester. We
already have about 100 workstations. The group I am in (doing this
project) has ordered another 50.
f) By year end 1990 they project 500 Andrew workstations in use in
Rochester
g) MODULA-2 will be the language of choice for most of the work done
in Rochester - this includes system code (with optimizations I
expect).
h) Software Engineering concepts will be applied for future work.
Obviously this is an evolutionary process. A group has already
accepted the responsibility of doing the ground work for abstracting
AS/400 constructs for use with MODULA-2.
i) BSD Unix will be used until Release 3 of AIX. At that time we will
likely change to AIX based on 386s (or whatever seems reasonable at
the time).
j) The development tools to run on the AS/400 will tie closely with
Andrew workstations. The user interface will exploit the Andrew
toolkit. Whenever OS/2 Presentation Manager and ADE and other
"stategic" products are available AND PROVIDE AS MUCH CAPABILITY AS WE
HAVE AT THAT TIME, we will migrate to the new world. (Capitalized
phrase is the one being used by Rochester management).
k) The engineering groups have decided to use Andrew - they have 20 (?) on
order and will begin whatever it is they do using Andrew as a base.

While much of this has been talked about for many months, it seems
there is some real commitment, as real money is being spent. Momentum
is building. Things can still change, so I will not bet my paycheck
on it, but clearly there is change in the air. I'll bet that by late
this year, it will be very clear just how far this is going to go.

There have been discussions about putting both Andrew File System
client and server on AS/400. It is a bit early for this now, but it
is under consideration. Given that some Raleigh folks have AFS client
and server working under OS/2 (using LU 6.2), it is probably very
doable for AS/400. This work might come as a clean up of the AS/400
database (which some have been counseling people to do for several
years). If AFS should be put on AS/400, this would provide AFS file
servers for the Andrew world.

After seeing how well AFS works here and how it solves so many
problems, I was very surprised to hear that Austin uses mostly
stand-alone machines and does not exploit even DS. It would seem to me
(not being there to really know) that a single-system-image that AFS
provides (even better than NFS) would be an enormous asset to Austin.
If you were running an AFS cell, we could gain access to fixes, new
releases etc directly from Austin. The kind of cooperation between
sites that is possible with AFS cells seems like a technology that can
really make a difference for IBM. We'd be very pleased to show off
this world if you can swing a trip to the northland (bring your long
wooley underwear though, winter is a'coming).

note that in the above there is mention of AIX on 386. there were a
number of different AIXes

an outside company had done an AT&T unix port to the IBM/PC
(PC/IX). When the 801/romp follow-on to the Displaywriter was killed
in Austin, it was decided to turn it into a unix workstation. The
801/romp software was the CP.r operating system implemented in
PL.8. For the unix workstation, the strategy was to have austin
implement an "abstract virtual machine layer" (in PL.8) and have the
company that did PC/IX do a port of AT&T unix to the "abstract virtual
machine layer". This was released as the unix workstation PC/RT with
AIXV2. The justification was that since the Austin people already knew
the hardware, they could provide a simplified interface for the unix
port ... and the whole thing could be done faster and with fewer
resources than having the company do a port to the bare hardware.

the palo alto group had been working on a (UCB) BSD port to 370 but
then were redirected to port to PC/RT (bare hardware) instead. It
turns out this was done with less effort than either the PL.8 abstract
virtual machine or the AIXV2 port (it was released on the PC/RT as
"AOS").

palo alto was also working with UCLA on LOCUS. Ports of Locus were done
to a number of platforms and a combined product was eventually done
that was released as AIX/370 and AIX/386.

Meanwhile, Austin was working on the RIOS followon to ROMP (released
as RS/6000) and AIXV3 (eliminating the abstract virtual machine and
merging in some BSD features).

"Andrew" was from CMU (distributed filesystem, widgets, etc) as well
as MACH (another UNIX-work alike, similar to UCLA LOCUS, but
different, used by NeXT and later Apple) and Camelot (transaction
processing).

IBM and DEC had jointly backed MIT Athena to the tune of $25M each
... which produced X-windows, Kerberos, and some number of other things.

IBM had backed CMU Andrew to the tune of $50m. Later IBM provided
initial/seed funding for Camelot spinoff from CMU as Transarc ... and
then later still, bought Transarc outright.

--
virtualization experience starting Jan1968, online at home since Mar1970

Walter Bushell <proto@panix.com> writes:
The bubble should have been called much earlier, but there was too much
money to be made in denying that a bubble was happening. And then there
were those who had religious belief in the market.

numerous writeups just have a huge number of people with a very large
vested interest in keeping it going ... somewhat the theme of the "Why
Isn't Wall Street in Jail?" and several other articles ... think of
the bubble as akin to a form of ponzi scheme with a significant
portion of manhattan playing in it.

related are the reports that wall street bonuses spiked over 400%
during the bubble (with large efforts to keep them from returning to
pre-bubble levels) and that their sector tripled in size (as percent
of GDP) during the bubble. "religious belief in the market" ... can
also be construed as a facade to keep the marks coming in.

during the height of the bubble there were references to the "musical
chairs" analogy and wondering which institutions would be holding the
toxic wastes when the music stopped.

and in the case of the person in 2003 pointing out that immediately
securitizing and selling off loans eliminates any reason for the loan
originators to care about loan quality and/or borrower qualifications
... the street really slammed him for daring to expose a major
component of the mechanism.

--
virtualization experience starting Jan1968, online at home since Mar1970

The first personal computer (PC)

Andrew Swallow <am.swallow@btopenworld.com> writes:
The USA and Britain went to war with Germany twice in the 20th century
so it is not their favourite country (for good things). A right
winger like Bismarck is hated by the left wing so they are not going to
advertise him. So the origins are ignored.

I was at dinner in Los Gatos with CTO and some of his staff from a major
Japanese industrial company and mentioned that my wife had lived in the
far east when she was a girl. When asked where, I said Nanking ... and
nearly all the faces turned pale and conversation stopped for a moment.

but we didn't "cold war" him for the genocide ... and nearly all of the
cold war went on long after he was no longer on the scene.

as to germany ... before my wife's dad was posted to MAGIC ... he had
an engineering combat group in germany ... at the end, typically out
in front of the tanks (roads & bridges) ... he had a collection of
officer daggers from surrenders (until most of the ww2 stuff was
stolen a few yrs ago) and liberated some camps. speculation was that
the liberation of the camps contributed to him not wanting to stay in
germany after the war.

Last year, I found some of his group's status reports in the National
Archives ... from one:
On 28 Apr we were put in D/S of the 13th Armd and 80th Inf Divs and
G/S Corps Opns. The night of the 28-29 April we cross the DANUBE River
and the next day we set-up our OP in SCHLOSS PUCHHOF (vic PUCHOFF); an
extensive structure remarkable for the depth of its carpets, the
height of its rooms, the profusion of its game, the superiority of its
plumbing and the fact that it had been owned by the original financial
backer of the NAZIS, Fritz Thyssen. Herr Thyssen was not at home.

Forward from the DANUBE the enemy had been very active, and an intact
bridge was never seen except by air reconnaissance. Maintenance of
roads and bypasses went on and 29 April we began constructing 835' of
M-2 Tdwy Br, plus a plank road approach over the ISAR River at
PLATTLING. Construction was completed at 1900 on the 30th. For the
month of April we had suffered no casualties of any kind and Die
Gotterdamerung was falling, the last days of the once mighty
WHERMACHT.

... snip ...

--
virtualization experience starting Jan1968, online at home since Mar1970

long ago and far away, my wife had been con'ed into going to POK to be
in charge of (mainframe) loosely-coupled (cluster) architecture, where
she created peer-coupled shared data architecture ... some past posts
http://www.garlic.com/~lynn/submain.html#shareddata

there was very little uptake, except for IMS hotstandby, until sysplex
(and parallel sysplex). there were also skirmishes with the
communication group over the use of SNA. There would be temporary
truces allowing her to use anything within the walls of the datacenter
... but the communication group "owned" everything that crossed the
datacenter walls.

in the early 80s, the communication group provided a big boost to the
emerging pc market with 3270 terminal emulation, for which they
eventually had a large install base.

in the late 80s, client/server was moving past the terminal emulation
paradigm and the communication group was doing lots of stuff trying to
protect its install base (in addition to all the mis-information
regarding SNA applicability for various things).

during the period (in addition to the nsfnet backbone & supercomputer
centers stuff), we had come up with a 3-tier network architecture; it
had been written into a large federal secure campus network response
and we were out pitching it to corporate executives ... taking lots of
hits from the communication group (given they were trying to suppress
even 2-tier) and/or the T/R forces. misc. past posts:
http://www.garlic.com/~lynn/subnetwork.html#3tier

Part of the above was the significant superiority of enet over 16mbit
t/r. The new almaden research bldg had extensive wiring ... including
cat5 for t/r. However, they fairly quickly found out that enet over
cat5 had better aggregate LAN thruput and lower latency than 16mbit
t/r. Somebody from Dallas E&S had done a paper comparing 16mbit t/r &
enet ... but as best I could tell, it was using early 3mbit enet
(before the CSMA/CD listen-before-transmit standard). '88 acm sigcomm
had some studies showing worst case enet was 8.5mbit sustained
effective thruput with 30-40 stations in a low-level device driver
loop constantly broadcasting minimum sized packets (much better than
16mbit t/r).

The other part was that the terminal emulation paradigm set the design
point for 16mbit t/r per-card thruput, based on 300+ stations sharing
the bandwidth. The RS6000 group had been ordered to use only PS2 cards
(they weren't allowed to do their own, not just t/r, but
everything). The PS2 16mbit t/r card had lower per-card thruput than
their custom 4mbit t/r card done for the PC/RT.

from IBM jargon:
TOOLS disk - n. A disk of shared data (especially of programs or
computer conferences) that is maintained automatically by the TOOLS
and TOOLSRUN programs. TOOLS was created in 1981, and now maintains
tens of thousands of disks of data in IBM, mostly shared and copied
across VNET (q.v.).

... snip ...

somewhat in the wake of the various taskforces that investigated me
for online computer conferencing.

--
virtualization experience starting Jan1968, online at home since Mar1970

Mainframe Hall of Frame. List of influential mainframers thoughout history

one of the people mentioned in this jan92 meeting in Ellison's
conference room claimed to have handled the majority of the tech
transfer of SQL/DS back to STL for DB2
http://www.garlic.com/~lynn/95.html#13

quote from above:
The surprise of the MVS project was that it happened faster than I
thought it would. In other words, Plan A collapsed, all right? Eagle
collapsed, and all of a sudden, everyone turned to us and said, "OK,
when can you ship this database product?" [laughter] And that's when
we had to make some fairly hasty, difficult decisions on ...

... snip ...

rather than Eagle evolving into DB2: after Eagle collapsed, the
System/R group was asked if they could do System/R for MVS (aka
DB2). At least part of that involved the transfer of SQL/DS technology
back to STL (according to the Oracle executive who had been in STL and
said he was responsible for the work).

--
virtualization experience starting Jan1968, online at home since Mar1970

Peter Flass <Peter_Flass@Yahoo.com> writes:
It became ACP and then TPF. I believe it's now used for a lot of other
high-volume simple-transaction systems like credit card processing.

Someone was nice enough to send me the reference for "Sabretalk", a
PL/I-derived language used for writing transactions for the Sabre system:
(http://home.roadrunner.com/~pflass/PLI/Sabretalk_Reference_Guide.pdf)

I think it's still used, but apparently is now being replaced by C.
Information on Sabre/TPF/ACP is a little sparse on the web.

somewhere around 1980, it was renamed TPF because it was being used for
things other than the airlines.

It was ACP that was the example that put the nail in the FS coffin: at
the time, Eastern was running ACP on a 370/195 ... and it was
projected that running ACP on an FS machine made from the exact same
195 circuits would drop thruput to that of running ACP on a 370/145 (a
30 times slowdown).
http://www.garlic.com/~lynn/2011c.html#17 If IBM Hadn't Bet the Company

The 3081k doubled the processor cache size of the 3081d, giving
possibly a 1/4th-1/3rd improvement (for things that had a high cache
miss ratio) ... TPF improving from 20% slower than a 3033 (on 3081d)
to about the same as a 3033 (on 3081k). Part of the issue was that TPF
didn't have multiprocessor support and so only ran using a single
CPU. 308x was never designed to have a single processor model.
Eventually they did come out with the 3083 (single processor) ... which
was a 3081 with one processor removed (the problem was that processor1
was in the middle of the cabinet; simply removing it would have left
the cabinet dangerously top-heavy, so things had to be reconfigured to
move processor0 from the top of the cabinet to the middle). more
discussion in this recent post
http://www.garlic.com/~lynn/2011b.html#49 VM/370 3081

Later, my wife did a short stint as chief architect for Amadeus
(taking the Eastern reservation system as the basis for the new
"european" reservation system). She had come down on the side of
selecting x.25 for the communication interface ... which had the SNA
forces getting her removed. It didn't do much good since Amadeus went
with x.25 anyway. recent refs
http://www.garlic.com/~lynn/2011.html#17 Looking for a real Fortran-66 compatible PC compiler (CP/M or DOS or Windows, doesn't matter)
http://www.garlic.com/~lynn/2011.html#41 Looking for a real Fortran-66 compatible PC compiler (CP/M or DOS or Windows, doesn't matter)

Then in the mid-90s, I was asked into the largest such airline
reservation system ... looking at some of the problems/features
... they had ten "impossible" things ... somewhat related to ACP/TPF
being such a fast (simple) implementation ... that it made it
difficult to do various things. Lots of complex data management was
performed on an MVS platform running DB2 ... and then periodically the
reservation system was shutdown and the TPF data infrastructures
rebuilt from the DB2 copy. This rebuild frequently ran approx. a shift
... and was becoming more & more difficult as service went global and
was expected to be 7x24. Old post about offspring with a part-time job
in college at an air freight company ... which used airline res
systems ... where it wasn't unusual for the sunday rebuild downtime to
overrun into 1st shift monday morning:
http://www.garlic.com/~lynn/2009o.html#42 Outsourcing your Computer Center to IBM ?

"routes" represented about 25% of the processor use ... and I redid
"routes" (complete rewrite on different platform and using totally
different approach). One of the issues was being able to drastically
increase scaleup to handle all passengers for all flts in the world.
Rough estimate that rewrite to handle that load with something like ten
RS6000/580s. This is past refs about the rewrite mentioning xscale
processor in smartphone has approximately that total processor MIP rate
(ten rs6000/580s):
http://www.garlic.com/~lynn/2010b.html#79 Happy DEC-10 Day
http://www.garlic.com/~lynn/2010e.html#53 z9 / z10 instruction speed(s)
http://www.garlic.com/~lynn/2010j.html#52 Article says mainframe most cost-efficient platform

... as well as significant retrenchment of its activity at institutions
of higher learning (helping groom the future generation of computer
professionals).

along with the lifting of gov. restrictions in the 80s ... also saw a
resurgence of university activity ... and the formation of "ACIS"
with an initial pool of $300M that was to be distributed to institutions
of higher learning.

The first personal computer (PC)

"Rod Speed" <rod.speed.aaa@gmail.com> writes:
And later than that the witch obsession ended up so bad that
some of the town squares ended up with a considerable layer
of human fat on the houses on the downwind side of the square,
the result of burning so many people at the stake in the square.

my wife tells a story of an uncle receiving a solicitation for a
donation to a salem witch burning memorial ... and writing back that
the family had already contributed sufficiently (witches burned)

--
virtualization experience starting Jan1968, online at home since Mar1970

lots of them are near a reliable supply of water ... somewhat the same
places that the mega datacenters have been going in (which need both
the water as well as the power from the plants). water is also used
for hydro-electric dams as well as the nuclear power plants.

the above mentions coal from the powder river basin in wyoming. the
coal plants i've seen in the middle of nowhere in wyoming ... are at
some water source (easier to transport the coal to the water than the
water to the coal)

The first personal computer (PC)

"Rod Speed" <rod.speed.aaa@gmail.com> writes:
Our hydro is basically mostly a pumped water system that does
load balancing for the entire eastern grid as well as true hydro.
Thats where most of the coal is too.
https://en.wikipedia.org/wiki/Snowy_Mountains_Scheme

grand coulee dam originally went in for flood control & irrigation
(originally targeted for a million acres) ... pumping water up into
the "grand coulee" to flow down towards the basin ... but it also
provided electricity. later the 3rd power house was added as well as
"reversible" pumps ... using excess electricity to "over" pump water
into the grand coulee during off-peak ... and then having the water
flow back down during peak (generating 312 mwatts):
https://en.wikipedia.org/wiki/Grand_Coulee_Dam

If IBM Hadn't Bet the Company

Charles Richmond <frizzle@tx.rr.com> writes:
Sometimes the better part of valor is just to keep your mouth shut!!!
:-) I learned that the hard way, like I learn most things I
guess. Sometimes I thought I was the only one seeing a certain thing,
and I would point the thing out to everyone. Then I found out that
*everyone* saw it, but had the good sense to keep quiet about it.
:-(

somewhat along the lines that, with the demise of FS, the corporate
culture became one of sycophancy and make-no-waves ... lots of
higher-level technical promotions and awards became extremely
sensitive to corporate politics (even if I hadn't otherwise already
managed to offend various parties).
http://www.garlic.com/~lynn/2001f.html#33 IBM's "VM for the PC" c.1984?

as mentioned in some of the above ... when POK originally contacted me
about the MVS 15min MTBF ... I had initially thought it was to discuss
how to fix the problems, but it turned out they wanted to make sure I
never mentioned it again (and that I be appropriately punished and, if
possible, gotten rid of) ... perception was much more important than
substance; again Boyd's career choice To Be or To Do.
http://www.garlic.com/~lynn/2000e.html#35 War, Chaos, & Business (web site), or Col John Boyd

bldgs. 14&15 still show in plant site satellite photo (many others
having been torn down).

later, getting close to 3380 first-customer-ship, product test ran a
regression bucket of (typical injected) errors against MVS ... and
found that the MVS system failed in all cases (requiring
reboot/re-ipl), and in 2/3rds of the cases there was no indication of
what caused the failure ... by then, there wasn't much reason not to
send out a note about MVS failing the error regression tests ... old
email in this post:
http://www.garlic.com/~lynn/2007.html#2 The Elements of Programming Style

Itanium at ISSCC

Robert Myers <rbmyersusa@gmail.com> writes:
The Air Force (or the CIA) has been crashing UAV's at such an alarming
rate that they have finally begun subjecting them to the same kind of
testing that human-occupied aircraft are subjected to. I wouldn't
credit the Air Force with an excess of caution or foresight.

from the above:
A senior Pentagon official has delivered a stinging attack on the US Air
Force, saying that its philosophy of using fully qualified human pilots
to handle unmanned aircraft at all times has resulted in unnecessary,
expensive crashes. By contrast, US Army drones with auto-landing
equipment and cheaply-trained operators have an enviable record

... snip ...

... and ...
The US Army has a differing philosophy: its "Sky Warrior" variant of
the Predator is intended to land itself automatically, and the
present-day Shadow has such kit already. Army drones are controlled by
noncommissioned tech specialists who, while fully trained and qualified
for their job, have no airborne stick time in regular aircraft. They are
always in theatre with the rest of the troops.

The first personal computer (PC)

Anne & Lynn Wheeler <lynn@garlic.com> writes:
numerous writeups just have a huge number of people with a very large
vested interest in keeping it going ... somewhat the theme of the "Why
Isn't Wall Street in Jail?" and several other articles ... think of
the bubble as akin to a form of ponzi scheme with a significant
portion of manhattan playing in it.

from above:
We could have jailed 'just one' of them back then, when they were down
for the count. Instead, we bailed them out! Made them richer. Gave them
$13.7 trillion, loans, credits, cash, asset buyouts. Gave them keys to
the Treasury. They didn't just recover, they 'ran the tables,' to use a
blackjack/pool metaphor. Now Wall Street dictators have absolute power,
ruling Washington, America, you and me.

... snip ...

--
virtualization experience starting Jan1968, online at home since Mar1970

ps2os2@YAHOO.COM (Ed Gould) writes:
I was reading an article today about IBM & DB2. I think it
said something like DB2 was IBM's real first try into relational
databases. My memory is foggy here; something in the back of my mind
says that is not quite correct. Back in the 70's (?) I vaguely
remember IBM having a FDP(?) that claimed to do relational database.
My slim memory says it may have been VM based. I do remember it had a
4 page white sales type paper(IUP?). No name comes up. Can anyone
supply me with a product name? I do recall something like this as we
were looking at a product and the show stopper was that it needed VM.

system/r ... san jose research, bldg. 28. the work was for vm/cms on
the group's 370/145.

quote from above:
The surprise of the MVS project was that it happened faster than I
thought it would. In other words, Plan A collapsed, all right? Eagle
collapsed, and all of a sudden, everyone turned to us and said, "OK,
when can you ship this database product?" [laughter] And that's when
we had to make some fairly hasty, difficult decisions on ...

as referenced above ... the "official" DBMS effort in STL was called
"EAGLE", but when that crashed ... then the system/r group was asked how
fast could a RDBMS be turned out for MVS. recent post/mention
in (linkedin) Greater IBM
http://www.garlic.com/~lynn/2011d.html#42 Mainframe Hall of Frame. List of influential mainframers thoughout history

trivia ... multics was on 5th flr of 545 tech sq. The science center
that did virtual machines, cp67/cms, etc ... was on 4th flr of 545 tech
sq. When the cp67 group split off from the science center, they took
over the Boston Programming Center on the 3rd flr (morphing into the
vm370 development group). The development group outgrew the space on the
3rd flr and moved out to the empty SBC bldg. (vacated in the legal actions
where IBM transferred SBC to CDC) in Burlington Mall. misc. past posts
mentioning 545 tech. sq
http://www.garlic.com/~lynn/subtopic.html#545tech

there was a mad rush to get products back into the 370 (hardware &
software) product pipeline ... having been killed off during the FS
period. Part of that was the head of POK managing to convince the
corporation to kill off the VM370 Burlington Mall development group,
because he needed to transfer all the people to POK for MVS/XA
development (or otherwise he couldn't meet the ship schedule). Endicott
managed to save the vm370 product mission, but had to recreate a
development group from scratch.

--
virtualization experience starting Jan1968, online at home since Mar1970

3270 Terminal

from ibm jargon:
bad response - n. A delay in the response time to a trivial request of
a computer that is longer than two tenths of one second. In the 1970s,
IBM 3277 display terminals attached to quite small System/360 machines
could service up to 19 interruptions every second from a user - I
measured it myself. Today, this kind of response time is considered
impossible or unachievable, even though work by Doherty, Thadhani, and
others has shown that human productivity and satisfaction are almost
linearly inversely proportional to computer response time. It is hoped
(but not expected) that the definition of Bad Response will drop below
one tenth of a second by 1990.

the above mentions a large research institution on the east coast
being quite proud of having .25sec avg. response ... supposedly much
better than a research institution on the west coast with a similar
workload and hardware that only had avg. response of .11sec.

the above also mentions TSO response was so bad that the 3278/3274
difference wasn't noticed by TSO users.

the 3278/3274 moved lots of electronics from the terminal back into
the controller (compared to the 3277/3272, reducing manufacturing
costs), resulting in worse performance (lots more protocol chatter
over the coax). this shows up later in 3278 terminal emulation having
much lower throughput than 3277 terminal emulation.

complaints to the organization responsible for the 3278/3274
eventually resulted in the response that the 3278 was designed for
data entry, not interactive computing.

as mentioned, the numbers are only for local channel attach
controllers ... remote 3270 and SNA introduce significantly more
delays.

--
virtualization experience starting Jan1968, online at home since Mar1970

The surprise of the MVS project was that it happened faster than I
thought it would. In other words, Plan A collapsed, all right? Eagle
collapsed, and all of a sudden, everyone turned to us and said, "OK,
when can you ship this database product?" [laughter] And that's when
we had to make some fairly hasty, difficult decisions on ...

the above mentions that a marketing guy was looking at a poster for the
original Santa Teresa lab announcement ... with an eagle soaring above
the building ... and decided on EAGLE for the grand MVS DBMS effort.

I was in DC with offspring for vacation the week before the Air &
Space museum opened (*AND* also the week before STL was to be
opened). At that time, STL was going to be called Coyote lab (after the
closest post office and the name of the valley). That week a working
ladies organization called "Coyote" was demonstrating on the steps of
the capitol (and getting lots of press) ... which appeared to prompt a
quick revision of the lab's name from Coyote to Santa Teresa (a nearby
cross-road; the lab has since been renamed Silicon Valley lab).

--
virtualization experience starting Jan1968, online at home since Mar1970

HMerritt@JACKHENRY.COM (Hal Merritt) writes:
I seem to recall working on a product called SLR (Service Level
Reporter). My (very poor) memory is of databases that looked a lot
like those later introduced by DB2.

dating back before sql were some 4th generation languages that were
offered by virtual machine based commercial service bureaus (initially
cp67 in the late 60s, and later vm370) ... RAMIS, NOMAD, FOCUS (in
some cases developed as part of competition between the different
virtual machine based commercial service bureaus)

the above also mentions that a spinoff from the INGRES project was
Britton-Lee ... including Bob Epstein as CTO. When Bob left for
Teradata (and then later founded Sybase), there was lots of recruiting
going on around bldg28/SJR (usually across the street from the plant
site) for a replacement for Bob. Of course, not nearly on the scale of
Shugart recruiting disk engineers
http://www.businessweek.com/archives/1992/b329273.arc.htm
http://www.mdhc.scu.edu/100th/Progress/Shugart/shugart.html

DB2 was a rather late RDBMS to ship ... largely because EAGLE was the
MVS strategic DBMS ... and it was only after the EAGLE effort crashed
that there was the rush to get System/R (and SQL/DS) over to MVS for
DB2. DB2 1983:
https://en.wikipedia.org/wiki/IBM_DB2

note that in 1989 ... there was work on a totally different DB2
... targeted for OS2.

"Dave Wade" <dave.g4ugm@gmail.com> writes:
If by "advanced" you mean "tricky to install and tempermental to
configure" then it was more advanced than Win95. Despite being called
"connect" OS2/Warp Connect was lacking in LAN TCP/IP. To get that cost
more than the base o/s. It was still embedded in the "Mainframe" world
of software pricing...

aka there was a lot of OS2 & PS2 still being targeted at corporate
customers.

at the time of the spring '96 m'soft MDC at Moscone ... m'soft had
hired some number of internet "old-timers" ... who were at the show
... a big move from closed (small, safe, corporate) LAN networks to
the Internet ... although the constant subtheme during the show was
"protect your investment" ... basically the whole BASIC-based (&
visual basic) paradigm would continue to be supported in the Internet
environment (resulting in a big uptick in various kinds of exploits
related to compromising those scripts).

--
virtualization experience starting Jan1968, online at home since Mar1970

from above:
It wasn't the first personal computer. Nor was it the most advanced. But
shortly after the IBM (R) Personal Computer arrived in 1981, it became
the leading platform in the revolution that brought computing out of the
glass house and into daily life

earlier, the cambridge science center had ported apl\360 to cp67 cms
for cms\apl. there was work to adapt the apl storage management to a
(large) virtual paged environment. cms\apl also provided interfaces to
cms system services ... including file i/o. this allowed some large,
"real world" applications to be done ... one such was that the
business people in corporate hdqtrs (armonk) loaded the most holy of
corporate assets (customer details) on the cambridge 360/67 cp67
service and used the system remotely from armonk to do business
planning written in apl.

however, cambridge took a lot of heat from the apl "purists" claiming
that the implementation of cms system functions had violated the apl
paradigm. supposedly "shared variables" was the replacement that
implemented access to system services consistent with the apl
paradigm.

from above (QBE being in the field as an IUP):
And there were people in the field, and they loved it. They had stories
of tape librarians who'd automated their tape library with it, and Gene
Trivett was going around and fixing some of the performance problems,
and it was popping up all over the planet. So it had a very loyal
following. It was obvious to everybody that this did something
wonderful. That this was an end-user program. So then the question
became, "So why don't we cancel System R?" or "Why don't we grow this
thing?"

... snip ...

a little later on in above:
I don't have the exact date, but around 1978, right? When did the actual
shoot-out occur? 1978? Gomory asked Dick Case to do a review of the
work. Dick Case included Ashok Chandra, who currently runs the Computer
Science Department - he's the latest version of Frank King - and one
other person, who were all disinterested people, but were technically
capable. They went to Yorktown and learned all about QBE, and then they
came to San Jose to learn all about System R, and I gave them my long
lecture about how the lock manager works and how Compare-and-Swap could
do locking, and we did it all right, and we knew how to do
Compare-and-Swap-Double. Dick Case was really impressed, because he's
probably the architect of Compare-and-Swap.

... snip ...

as I've posted before, compare&swap was invented by charlie at the
cambridge science center working on fine-grain multiprocessor locking
for cp67 (compare-and-swap was chosen because "CAS" are charlie's
initials). we tried to get "CAS" into 370, but were rebuffed because the
POK favorite son operating system people claimed that test&set was more
than adequate. The challenge given us by the owners of 370 architecture
was to come up with uses other than kernel multiprocessing. Thus was
born were the uses for application multithreaded operation, examples
that still appear in principles of operation. misc. past posts
mentioning multiprocessor support &/or compare&swaphttp://www.garlic.com/~lynn/subtopic.html#smp

the above mentions a project at the cambridge science center. one of
the people working on the project was the "L" in "GML", which was
invented at CSC in 1969. In the late '70s, "GML" morphs into the ISO
standard "SGML" ... and then in the late '80s, "SGML" morphs into
"HTML". misc. past posts mentioning gml, sgml, html, etc
http://www.garlic.com/~lynn/submain.html#sgml

End of an era

Walter Bushell <proto@panix.com> writes:
In a way it's a relief, they should never have flown those things.
Supposedly reusable, they had to be rebuilt after each flight, mainly
because they cheapened the design after approval.

there was also the explosion. afterwards there was a parody written
about the "o-rings" ... the queen was lobbied by somebody in her court
that columbus' ships should be built in the mountains where the trees
were ... rather than lugging the timber to the harbor and building the
ships there. because the ships were built in the mountains, they had to
be cut into three sections for transport to the harbor and then glued
back together ... before sailing across the ocean to discover america.

--
virtualization experience starting Jan1968, online at home since Mar1970

the Omrud <usenet.omrud@gmail.com> writes:
The read/write heads were fairly large, and when the machine had to
swap some of its operating system out to the drive to make room for a
large program (48K words of memory in that machine), the heads would
fly back and forth with terrifying speed. You could see all this
happening because both the lid of the drive was made of clear perspex.
The flying heads set up a vibration in the drive and if you were
unlucky the drive would start to walk across the floor. The only way
to stop it was to damp the vibrations, and the simplest way to do that
was to sit on the drive.

the comment was that tcpip is the technology basis for the modern
internet ... and the nsfnet backbone was the operational basis for the
modern internet (and cix was the business basis for the modern
internet). Note that ARPANET was a homogeneous network (not
internetworking), suffering some of the same deficiencies as SNA/VTAM,
where there would be outages to do synchronized updates of all the
IMPs (somewhat akin to large SNA customers of the period requiring
synchronized updates of all their 3705s)

bits and pieces of posts from decade ago discussing '82 (ibm) sjr
gateway into csnet ... and other old email about the great cutover to
internetworking protocol on 1jan83:
http://www.garlic.com/~lynn/internet.htm

The issue was the NSFNET backbone being the operational basis for the
modern internet ... wasn't that the backbone was so much of a network
in itself ... but it was internetworking lots of networks.

The NSFNET backbone had something of a multiple purpose ... being
billed as both a high-speed network interconnecting supercomputing
centers as well as a backbone internetworking different networks. We
had been working with the director of NSF back in the 84/85 period
... and with several of the sites that would likely become NSFNET
backbone nodes. The various CSNET and other network nodes tended to be
56kbits. I had the HSDT project inside IBM with T1 & faster speed links
... and was pitching the same for NSF. misc. past email mentioning HSDT
http://www.garlic.com/~lynn/lhwemail.html#hsdt

Things started getting weird and participants were being called up and
told the meetings were canceled. The director of NSF tried to help by
writing a letter to the corporation, copying the CEO ... but that just
aggravated the internal politics. misc. old email mentioning the high
speed activity
http://www.garlic.com/~lynn/lhwemail.html#nsfnet

calling for a "T1" backbone (from example that I already had
running). The winning bid actually put in 440kbit links ... then
possibly trying to meet the letter of the RFP put in T1 trunks with
telco multiplexors ... running multiple links per trunk. I
sarcastically mentioned that they could possibly have called it a "T5"
network since possibly some of their T1 trunks were in turn
multiplexed over even faster trunks.

Other weird stuff was that there was a lot of mis-information being
spread around about the applicability of SNA/VTAM for the NSFNET
backbone. Somebody on the sidelines collected a large amount of the
email flowing on the topic and forwarded it to me ... partially
included here
http://www.garlic.com/~lynn/2006w.html#email870109

I have quotes from catenet and other documents (i started shadowing
all the documents when the server was still at sri ... and have a
complete set of IENs & other documents) in the referenced collection
http://www.garlic.com/~lynn/internet.htm

with a piece of an ARPANET newsletter figuring 100 nodes by 1983 (aka
IMPs, as opposed to hosts).

in the 90s, Postel had me give a talk at ISI ... with a bunch of
people invited from USC, on the difference between (internet)
technology and operation (why "tcp/ip" wasn't business critical & some
compensating procedures) ... CATENET was ien-32 ... part included in
an old apr99 post ... mentioned here
http://www.garlic.com/~lynn/internet.htm#5

One of the operational issues that took quite a while to evolve was
monitoring, diagnostics, command&control.

BBN had quite a bit of stuff in the IMPs ... although in the early 80s
there were jokes about possible scaling issues & the ARPANET 56kbit
links periodically being saturated with IMP chatter about what they
were doing (every IMP telling every other IMP what was going on). That
didn't carry over into the TCP/IP great switch-over on 1jan83.

Not all of what happened was strictly because of the NSFNET backbone
... but was happening in approx the same time-frame. At Interop '88, I
had a PC/RT in a non-IBM booth. Sunday, before the show started, the
floor nets were crashing ... and diagnosing and coming up with a
workaround continued most of the night, up until nearly the opening of
the show on Monday. What was learned was included in RFC1122
(Requirements for Internet Hosts -- Communication Layers, still a
standard) published a year later (Oct89). The booth I was in was
located next to the SUN booth & one of the things they were showcasing
was Case & SNMP ... which still hadn't won the monitoring/diagnostic
wars. However, I did get Case to come over and load/build SNMP on the
PC/RT. misc. past posts mentioning interop '88
http://www.garlic.com/~lynn/subnetwork.html#interop88

The IBM100 article mentions the transition from T1 (which was
something of a fiction, given what they actually put in) to T3.
Possibly hoping to quiet my sniping, I was asked to be the "red team"
for the T3 proposal ... and a couple dozen people from a half dozen
labs around the world were the "blue team". At the final review, I
presented first, followed by the blue team. 5-10 mins into the blue
team presentation, the person running the review pounded on the table
and said he would lie down in front of a garbage truck before he
allowed anything but the blue team proposal to go forward. I got up
and walked out ... along with a few others.

--
virtualization experience starting Jan1968, online at home since Mar1970

The first personal computer (PC)

Peter Moylan <invalid@peter.pmoylan.org.invalid> writes:
Youngsters these days don't know what it was like back then. They'll
come up with singular data and think nothing of it.

there is a recent ongoing thread over in comp.arch about applications
being able to do a (non-privileged) cache-line purge for a range of
addresses. cp67/vm370 had something similar for doing a (similar)
"zeros page" for a range of virtual addresses ... back in the days
when processor real storage was smaller than today's processor caches.
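
a minimal modern-terms sketch of the idea (my example, not the
cp67/vm370 code): on linux, madvise(MADV_DONTNEED) on a private
anonymous mapping discards the pages so the next touch gets zero-fill
... no page-out and no page-in, roughly a "zeros page" for a range of
virtual addresses:

    #include <stdio.h>
    #include <string.h>
    #include <sys/mman.h>

    int main(void)
    {
        size_t len = 4 * 4096;          /* four pages */
        char *p = mmap(NULL, len, PROT_READ | PROT_WRITE,
                       MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
        if (p == MAP_FAILED) { perror("mmap"); return 1; }

        memset(p, 0xff, len);           /* dirty the pages */

        /* discard the range: the kernel drops the frames and the
           next touch is satisfied with zero-fill -- no page i/o */
        if (madvise(p, len, MADV_DONTNEED)) { perror("madvise"); return 1; }

        printf("%d\n", p[0]);           /* prints 0 */
        return 0;
    }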

--
virtualization experience starting Jan1968, online at home since Mar1970

The first personal computer (PC)

jmfbahciv <See.above@aol.com> writes:
<grin> right. Now read speedo's reply. I figured that's (electricity)
is what he would say. Lots of wiring would have to be done before any
of those coal trains could get here. and that's also assuming that
there is an army guarding all the copper.

there was an article that Ford had started out in electrics ... and
defied popular wisdom by switching to gasoline engines. Early
production just barely covered his costs and kept him in business. What
supposedly made the Model-T was "French Steel" (a new process developed
in Europe that tripled the strength of steel) ... allowing the weight
of the vehicle to be cut significantly (which played a major role given
how little power was available from the gasoline engines of the
period).

--
virtualization experience starting Jan1968, online at home since Mar1970

early part of the century, original Basel2 draft included new section
for including qualitative measures (in addition to the quantitative
measures) for determining risk adjusted capital ... somewhat akin to
ISO9000 for senior bank executives (did they understand what they were
doing). during the review process, the section was almost completely
gutted.

the Basel2 qualitative section would have introduced additional (risk)
criteria for adjusting "risk adjusted capital" ... nearly all of the
rest of the world was in favor of the additional provisions (some
amount of the original wordsmithing came from the NYFED & European
institutions). it was primarily the too-big-to-fail US financial
institutions that were responsible (during the review process) for
minimizing both the risk criteria used for calculating risk adjusted
capital ... as well as the actual amount required.

--
virtualization experience starting Jan1968, online at home since Mar1970

from ibm jargon:
MVM - n. Multiple Virtual Memory. The original name for MVS (q.v.),
which fell foul of the fashion of changing memory to storage.

MVS - n. Multiple Virtual Storage, an alternate name for OS/VS2
(Release 2), and hence a direct descendent of OS. OS/VS2 (Release 1)
was in fact the last release of OS MVT, to which paging had been
added; it was known by some as SVS (Single Virtual Storage). MVS is
one of the big two operating systems for System/370 computers (the
other being VM (q.v.)). n. Man Versus System.

being able to treat everything as (virtual) address space ... which
then shows up later in the S/38 (and its as/400 follow-on). It
somewhat came from TSS/360 (the "other" operating system for the
360/67) and other virtual memory systems from the 60s.

"single pool" of data was scaling issue for s/38 ... since there was
scatter allocation across all disks ... the whole infrastructure had to
be backedup and restored as single entity ... single disk failure
required restoring the whole infrastructure. this problem was possibly
major motivation for s/38 being early RAID adopter.
https://en.wikipedia.org/wiki/System/38
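
a rough illustration of the scale-up problem (the failure-rate number
here is purely my own assumption): with scatter allocation across N
drives, any single drive failure forces the full restore, so the
chance of that grows quickly with N:

    /* illustrative only: assumed annual per-drive failure rate p;
       chance that at least one of n drives fails (forcing a full
       restore of the single pool) is 1 - (1-p)^n */
    #include <math.h>
    #include <stdio.h>

    int main(void)
    {
        double p = 0.10;                /* assumed 10%/yr per drive */
        for (int n = 1; n <= 16; n *= 2)
            printf("%2d drives: %4.0f%% chance/yr of full restore\n",
                   n, 100.0 * (1.0 - pow(1.0 - p, n)));
        return 0;
    }

raid tolerating a single-drive failure changes the full-restore event
from "any one drive fails" to "a second drive fails during the rebuild
window" ... a much smaller number.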

Problems that TSS/360 had scaling its memory mapped filesystem
... didn't appear to have been corrected/improved with Future
System. The convoluted FS hardware execution contributed to the
analysis that applications running on an FS machine built with the
fastest available technology (370/195) would have the thruput of a
370/145 (about a factor of 30 times thruput hit). The combination of
performance, extreme complexity, and many (complex) areas not even
being fully defined ... all contributed to the demise of FS.

which is a descendent of KeyKOS. The precursor to KeyKOS was GNOSIS,
developed for 370 at Tymshare. When MD bought Tymshare, I was brought
in to evaluate GNOSIS as part of its spinning off to Key Logic.
http://cap-lore.com/CapTheory/upenn/

the above mentions a person having done a stint at HaL ... which was
turning out a 64bit sparc system. The "H" in HaL was my former manager
at IBM and the "L" in HaL was a former employee of SUN ... during the
initial formation of HaL, SUN objected strenuously to the "L" being
part of HaL.

the MVM/MVS was OS/VS2 Release 2 ... to differentiate it from OS/VS2
Release 1 "SVS" (single virtual storage) ... which was MVT laid out in
a single (16mbyte) virtual address space ... somewhat similar to
running MVT in a 16mbyte virtual machine ... except MVT had some stub
code that handled the page faults and page I/O ... instead of having
CP67 do it.

I recently got some background history on what was to have been OS/VS2
Release 3 ... the operating system for FS. Note that the MFT
transition to virtual memory was called OS/VS1 (which was MFT
typically laid out in a single 4mbyte virtual address space).

The biggest code change for MVT to virtual memory had to do with I/O
operations. The traditional MVT paradigm had applications (and/or
libraries called by applications, aka bsam, qsam, bdam, etc) build
channel programs in application space and then invoke the "EXCP/SVC0"
system call. Channel programs used "real addresses" ... the issue for
SVS (and/or cp67 with virtual machines) was that the passed channel
programs now all had virtual addresses.

In CP67, the virtual machine channel programs were processed by
"CCWTRANS", which scanned the virtual machine channel programs
... creating a duplicate ... replacing the virtual addresses with real
addresses. The initial pass at making MVT support its own single
16mbyte virtual address space (for "SVS") was to "borrow" CP67's
CCWTRANS ... and craft it into MVT's EXCP processing (I remember being
in the POK machine room offshift for some long forgotten reason, while
Ludlow was busily getting MVT running with its own virtual memory
support on a 360/67 ... initial work for SVS).
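
a minimal sketch of the channel-program translation idea (not the
actual CCWTRANS code; the CCW layout and helper names here are
simplified/invented): walk the virtual channel program, pin &
translate each data address, and emit a shadow copy that the real
channel can execute:

    /* simplified channel command word -- invented layout */
    typedef struct {
        unsigned char  op;       /* channel command code */
        unsigned int   addr;     /* data address (virtual on input) */
        unsigned char  flags;    /* chaining flags */
        unsigned short count;    /* byte count */
    } ccw_t;

    #define CCW_CHAIN 0x40       /* command/data chaining (assumed) */

    /* hypothetical helper: fault the page in, pin it for the I/O
       duration, and return the real address */
    extern unsigned int pin_and_translate(unsigned int vaddr);

    /* build a shadow channel program the real channel can execute */
    void ccwtrans(const ccw_t *vprog, ccw_t *shadow, int max)
    {
        for (int i = 0; i < max; i++) {
            shadow[i] = vprog[i];                  /* copy the ccw */
            shadow[i].addr = pin_and_translate(vprog[i].addr);
            /* a real translator also has to split any ccw whose
               buffer crosses a page boundary into data-chained
               pieces, since contiguous virtual pages need not be
               contiguous in real storage */
            if (!(vprog[i].flags & CCW_CHAIN))     /* end of program */
                break;
        }
    }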

There was a big challenge going from "SVS" to "MVS": OS/360 had a
heavily "pointer passing" API paradigm. The result started out with
each application getting its own 16mbyte virtual address space ... but
with an 8mbyte image of the os/360 kernel occupying part of every
application's 16mbyte virtual address space.

The other major problem was that MVT had a lot of "subsystems" that
sat outside the kernel. An application would generate a subsystem
call, pass through the kernel, and show up in the subsystem ... the
passed application pointer could be used by the subsystem (because
they were all in a single address space). In the transition to MVS,
all applications got their own virtual address space ... but so did
all these "subsystems" ... which no longer had access to the
application address spaces.

The solution to this was the "common segment" ... an area that was
common to every virtual address space (analogous to the 8mbyte kernel
image). Applications could acquire a dedicated part of the "common
segment", stuff their parameters there, and call the subsystem ... and
the passed pointer was accessible to the subsystems. This area started
out as 1mbyte, which, along with the 8mbyte kernel area, left the
application only 7mbytes. However, as systems grew and subsystems were
added ... the common segment size had to grow. Prior to the transition
to 370/xa & 31bit virtual addressing ... many large customer systems
had a 4-5mbyte "common segment" ... threatening to grow to 5-6mbytes
(in some cases leaving only 2mbytes for application use).
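
the squeeze is simple arithmetic (using the numbers from the text
above):

    /* 24-bit addressing: how the kernel image plus a growing common
       segment squeezed the per-application area */
    #include <stdio.h>

    int main(void)
    {
        int addr_space = 16;            /* mbytes, 24-bit addressing */
        int kernel     =  8;            /* mbytes, kernel image      */

        for (int csa = 1; csa <= 6; csa++)
            printf("%dmbyte common segment -> %dmbytes for the app\n",
                   csa, addr_space - kernel - csa);
        return 0;
    }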

As a temporary expedient, a small subset of the 370/xa architecture
was retrofitted to 3033s, called "dual-address space". This provided
new instructions for semi-privileged "subsystems" which could be used
to access a secondary virtual address space ... a subsystem would be
entered with a secondary address space pointer for the calling
application. This required rewriting all the subsystems ... some of
which still hasn't been done ... so common segment support has
continued up until the current day.

For other drift ... charlie had invented the compare&swap instruction
when he was doing work on cp67 fine-grain multiprocessor locking at
the science center (compare&swap was chosen because CAS are charlie's
initials). An attempt was made to get the instruction added to
370. This was initially rebuffed because the POK favorite son
operating system (aka MVT) people claimed the test&set instruction was
more than sufficient. The owners of 370 architecture gave us a
challenge to come up with uses for compare&swap other than
multiprocessor locking (to justify the instruction's inclusion in
370). Thus were born the uses for application multiprogramming
(multithreaded) operation ... examples that still survive in the
current principles of operation. multiprogramming and multiprocessing
examples:
http://publibz.boulder.ibm.com/cgi-bin/bookmgr_OS390/BOOKS/dz9zr003/A.6?DT=20040504121320

compare&swap semantics, in some form, has been picked up by many other
platforms and typically is heavily used by, at least, large DBMS
implementations (for multithreaded operations, whether or not running in
multiprocessor environment).
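
a minimal version of the classic application (multithreaded) use, in
c11 atomics rather than 370 assembler ... the same retry-loop shape as
the principles-of-operation examples (fetch the old value, prepare the
new one, and only store it if nobody else got there first):

    #include <stdatomic.h>
    #include <stdlib.h>

    struct node { int value; struct node *next; };

    static _Atomic(struct node *) head;     /* shared list head */

    /* lock-free insert at the head of the list */
    void push(int value)
    {
        struct node *n = malloc(sizeof *n);
        n->value = value;
        n->next  = atomic_load(&head);
        /* if head still equals n->next, swing it to n; otherwise
           n->next is refreshed with the current head and we retry */
        while (!atomic_compare_exchange_weak(&head, &n->next, n))
            ;                               /* lost the race -- retry */
    }

compare-and-swap-double was for the cases (like removal from such a
list) where a counter has to be updated atomically along with the
pointer to avoid the "ABA" problem.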

Anne & Lynn Wheeler <lynn@garlic.com> writes:
I recently got some background history on what was to have been OS/VS2
Release 3 ... the operating system for FS. Note that the MFT
transition to virtual memory was called OS/VS1 (which was MFT
typically laid out in a single 4mbyte virtual address space).

small piece from last summer (a customer asked me if I could contact
people with more details regarding history of the period ... the reply
went both to me and to the original requester):
Of course, the estimates for OS/VS were based on a misperception. The
Kingston estimate for OS/VS2 Release 1 (SVS) had an estimate for the
work needed for Release 2 (MVS), but it was couched as release 1 cost
plus a delta - in other words, the same cost as release 1 plus some
more. Since the Kingston resources were being redeployed to FS, that
meant that there weren't going to be enough people to do both. Since MVS
was supposed to be the glide path for FS (which would be OS/VS2 Release
3), this was unwelcome news. xxxxx and yyyyy modified the plan to reuse
some of the SVS resources plus people transitioning from OS/360. Bob
Evans did his part by cutting a year off the MVS development schedule.

reference about Ludlow offshift work (mentioned in previous note, I
had referenced being in the machine room 3rd shift talking to Ludlow):
Lynn is right about Ludlow. They installed a full duplex Model 67 in the
706 computing center (705 had a model room). Don worked out the channel
program translation techniques. He and bbbbb had a patent on real-time
channel program translation that probably worked, but had a lot of
moving parts. ccccc and I had an alternative proposal which was closed
because it was felt that the bbbbbb-Ludlow patent was all we
needed. Moot point, because we never built the hardware anyway. Don
spent every waking minute on the model 67 (which was configured as two
systems).

... snip ...

another piece:
Note to Lynn - I have always given zzzzz the credit for turning Bob
Evans around. For reasons unknown to me, the TSO group had the flip
charts and wallboard zzzzz used. The clincher was the ability to run 16
initiators simultaneously on a 1 megabyte system, taking advantage of
the fact that MVT normally used only 25% of the memory in a
partition. The resulting throughput gain (compared to real hardware) was
substantial enough to convince Bob. It helped that Tom Simpson and Bob
Crabtree had hosted an MFT II system TSS-Style and shown similar
performance gains. Of course, since CP67 was a pickup group they weren't
considered and we had the OS/VS adventure instead.

... snip ...

Note Simpson & Crabtree had done HASP. another piece:
HASP (Houston Automatic Spooling Priority System) was developed for the
Houston Manned Space Center by Tom Simpson, Bob Crabtree and a couple of
others who used their experience with the Moonlight (DCS) system. It
overcame some of the horrendous design decisions that crippled MFT (many
of these were fixed in MFT II). It was released as a type 3 program (I
still have one of the source tapes, long since changed to chocolate) and
turned the OS/360 program around. At the same time the West Region used
their experience with Moonlight to create ASP. We used the BSC-B release
when I was at Ohio State in 1969. Prior to that the RJE package was
awful, although they all were due to the STR protocol. Lynn's workaround
was one of the usable RJE packages - aaaaaa's team never did get it, and
produced some real stones.

... snip ...

As an undergraduate in the 60s, I had ripped out the HASP 2780/STR
support, in part to reduce the HASP real storage footprint ... and
replaced it with 2741 & TTY terminal support along with an editor
(that mostly re-implemented the CMS editor syntax ... none of the code
was re-usable since the CMS and HASP environments were so different).

HASP support (& some people) move to g'burg for morphing HASP into
JES2. ASP support was also moved into the same g'burg group. My wife
did a stint in the group (after FS) ... including part of the
"catchers" for ASP to "JES3". She was co-author (along with person
that sent this recent note) for "JESUS" ... "JES Unified System"
... that included all the things from JES2 & JES3 that neither
customer camps could live without. For various reasons ... it never
happened ... and both JES2 & JES3 continue to exist today.

the "TSS-Style" thing was called "RASP". Simpson later leaves and
appears as a "Fellow" in Dallas working for Amdahl ... and redoing
"RASP" (in "clean room"). There was some legal action that attempted
to find any RASP code included in the new stuff being done at Amdahl.

more HASP stuff from the note:
To find SPOOL you'll have to get some old 7070 marketing material. The
problem being addressed was that the peripheral equipment (card
reader, punch, printer) were all 150 docs/minute devices, whilst
Univac and others were touting fast card readers (basically model 85
collators wired directly into the computer) and printers - 300
cards/minute and 600 lines/minute. The mainframes (704/709 and 705) us
off-line card to tape, tape to card, and tape to printer equipment
(with a ghastly wire printer that would print at 1000 lines a minute
for a couple of seconds or so). This was an unacceptable solution for
a mid-range system that was meant to replace the IBM 650. The solution
took advantage of the interrupt facility built into the 7070 - the off
line operations executed in the background while an application was
running, so the 7070 became a tape-in, tape-out system, just like its
big brothers, but without requiring so much extra equipment. This was
heavily touted until 1959, when the IBM 1401 was announced. Since the
1401 could do simultaneous card-tape, tape-card, and tape-print, had
the wonderful 1403 train printer and a fast card reader-punch (the
1402), and was incredibly cheap (less than the cost of the card
equipment for the 7070), SPOOL suddenly became a bad word, to the
dismay of the sales force.

... snip ...

My first student programming job was re-implementing the 1401 "MPIO"
on 360. The univ. had a 709/1401 combination where operators manually
moved tapes between the 709 & 1401. The univ. was on a plan to replace
the whole thing with a 360/67 running tss/360. As part of the
transition, the 1401 was replaced with a 360/30. The 360/30 could run
1401 hardware emulation and needed no new software ... but for
whatever reasons, I was paid to redo the app in 360. I got to design &
write my own monitor, device drivers, storage management, interrupt
handlers, task management, error recovery, etc. The only criteria were
that card->tape produce the same tape as created by the 1401 ... and
that the 709 tapes for tape->printer/punch be handled the same. One of
the things was being able to concurrently do card->tape and
tape->printer/punch operations (a rough sketch below).

Before doing the HASP 2741/TTY CRJE thing ... I had added TTY/ASCII
support to cp67 (which came with 2741 & 1052 support). The cp67
2741/1052 terminal support did automatic terminal type identification
(leveraging the 2702 SAD command being able to switch the line-scanner
type associated with a port/line). I attempted to preserve the dynamic
terminal type identification ... and it worked for leased/direct
lines. However, the 2702 had taken a hardware shortcut and hardwired
the oscillator/line-speed for each port. My dream of having a single
dial-up (hunt group) number for all dial-in terminals was blocked.
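
a minimal sketch of the dynamic terminal-type idea (invented names,
not the cp67 code): on connect, cycle the port's line scanner (the
2702 SAD command) and probe until the terminal answers cleanly; doing
the same for line speed would have required a switchable oscillator,
which the 2702 had hardwired per port:

    enum scanner { SCAN_1052_2741, SCAN_TTY };      /* line-scanner types */

    /* hypothetical helpers standing in for the 2702 SAD command and
       for writing/reading an identification sequence on the line */
    extern void set_line_scanner(int port, enum scanner s);
    extern int  probe_terminal(int port);           /* 1 if sane reply */

    /* try each scanner type until the terminal answers cleanly */
    enum scanner identify_terminal(int port)
    {
        static const enum scanner tries[] = { SCAN_1052_2741, SCAN_TTY };
        for (int i = 0; i < 2; i++) {
            set_line_scanner(port, tries[i]);       /* switch scanner */
            if (probe_terminal(port))
                return tries[i];
        }
        return SCAN_1052_2741;                      /* default */
    }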

This was somewhat the motivation for the univ. to start a clone
controller effort: reverse engineer the channel interface ... and
build a channel interface board for an Interdata/3 programmed to
emulate a 2702 ... but supporting both dynamic terminal type and
dynamic line-speed identification. Later, four of us were written up
as being blamed for the clone controller business. misc. past posts
http://www.garlic.com/~lynn/subtopic.html#360pcm

--
virtualization experience starting Jan1968, online at home since Mar1970

Anne & Lynn Wheeler <lynn@garlic.com> writes:
HASP support (& some people) move to g'burg for morphing HASP into JES2.
ASP support was also moved into the same g'burg. My wife did a stint in
the group (after FS) ... inclduing part of the "catchers" for ASP to
"JES3". She was co-author (along with person that sent the note) for
"JESUS" ... "JES Unified System" ... that included all the things from
JES2 & JES3 that nether customer camps could live without. Various
reasons ... it never happened ... and both JES2 & JES3 continue to exist
today.

my wife was then con'ed into going to pok to be in charge of
(mainframe) loosely-coupled (cluster) architecture. while there she
did peer-coupled shared data ... which saw little uptake (except for
IMS hot-standby) until sysplex (and parallel sysplex) ... contributing
to her not remaining long. misc. past posts mentioning peer-coupled
shared data architecture
http://www.garlic.com/~lynn/submain.html#shareddata

another factor was the skirmishes with the communication group, which
was demanding that she use SNA for loosely-coupled operation
... there would be temporary truces where she would be allowed to use
whatever she wanted (but the communication group "owned" everything
that crossed the walls of the datacenter).

a decade later, she did a short stint as chief architect for Amadeus
... but didn't last long ... the communication group got her removed
when she backed the use of x.25 (over sna); it didn't do them much
good, Amadeus went with x.25 anyway.

long ago and far away ... i was at an annual get-together ... that
JOLT had selected for initial betatest ... they trucked in a large
number of cases of the stuff for the gathering. fading memory: I can't
remember if that was the same year 60minutes was allowed in to do a
segment (after several weeks of negotiating that they wouldn't do a
hack job, they then proceeded to do everything they had promised not
to do).

--
virtualization experience starting Jan1968, online at home since Mar1970

fyi, next thursday (7pm?) is the local unix user group meeting. I'll
be giving an informal trip report on Interop '88 (two weeks ago in
santa clara) and Hacker's 4.0 (this last weekend ... contrary to CBS,
Hacker's is not a revolutionary army in the santa cruz hills).

End of an era

Ahem A Rivet's Shot <steveo@eircom.net> writes:
They didn't make it past 1990 and only two were launched in
configurations with lower payload than a Saturn V, but yes something like
the final planned configuration would have been a nice tool if it could be
made.

the first discovery mission (41-d) went up with sbs-4 ... which had
its own booster; after release from discovery, the booster took it up
to geosynchronous orbit.

Note also the executive that was predicting in the mid-80s that sales
would double (to $120B ... see upthread about fast track), and there
was a massive bldg. program to double manufacturing capacity (however,
at the time, it was relatively trivial to show that the business was
moving in the opposite direction).

I think somebody could write a book on all the tricks done for pumping
up executive bonuses.

Manage-to-the-bonus became widespread in American culture, and tricks
that misaligned MTTB with the business were epidemic. I've written
several times in the wake of sarbanes-oxley that GAO (apparently
believing SEC was doing little or nothing) started doing audits of
public company financial filings, showing an uptick in fraudulent
filings (apparently to boost executive bonuses) ... and even if the
filings were later revised ... the bonuses weren't adjusted.

my first programming job as an undergraduate was doing my own
mini-monitor for the 360/30 (replacing the 1401 "MPIO"). After a
couple other things, the univ. gave me responsibility for the
production operating system ... I got to do sysgens, maintenance, etc.

later at IBM, career choice To Be or To Do ... from the dedication of
Boyd Hall at the Air Force Weapons School, also mentioned recently
with regard to the IBM Jargon definition of fast track. I had
sponsored Boyd's briefings at IBM. The first one I attempted through
employee education, which initially agreed. However, after learning
more about the content of the talk, they changed their mind,
recommending that the audience be restricted to competitive analysis
depts only (saying IBM spends a great deal of money on training
managers to handle employees, and having general employees attend
could be counter-productive).

note that client/server wasn't directly the problem. client/server
was a reflection of the growing computational power of all these
things on desktops ... and required the company to adapt. It was
possible to see this in the 80s and formulate what was needed ... but
change was being blocked. In the late 80s, a senior disk engineer got
a talk scheduled at the internal, world-wide, annual communication
group conference ... and opened his talk with the statement that the
communication group was going to be responsible for the demise of the
disk division.

The issue was that the terminal emulation/SNA paradigm had a
stranglehold on the datacenter, and large amounts of data were
starting to leak out to more distributed-computing-friendly platforms.
The disk division could see the leading edge of this in disk sales ... and
formulated several products to address the situation ... however,
since the communication group had strategic ownership of everything
that crossed the datacenter walls, the disk division was constantly
blocked. misc. past posts mentioning terminal emulation
http://www.garlic.com/~lynn/subnetwork.html#emulation

The problem & solutions were understood before the early 90s, but
there were forces attempting to preserve the stranglehold status quo
and blocking any adaptation to the changing environment. A major theme in
Boyd's briefings was prevailing by adapting faster than your
competition.

Mike Hore <mike_horeREM@OVE.invalid.aapt.net.au> writes:
Ummm, I think it goes a loooong way further back - IBM had a tradition
of avoiding words that made computers sound in any way human. I just
checked the 701 manual (from Bitsavers), dated 1953, and they refer to
"electrostatic storage". Likewise the 704 manual talks about "core
storage". Memory has always been "storage" in IBM-speak.

In IBM's most important product announcement since the System/360 in
1964, the IBM System/370 is introduced. Able to run System/360 programs,
the System/370 is one of the first lines of computers to include
"virtual memory" technology, a technique developed in England in 1962 to
expand the capabilities of the computer by using space on the hard drive
to accommodate the memory requirements of software.
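
For anyone who hasn't seen the mechanism spelled out, here is a
minimal demand-paging sketch (hypothetical structures and helper
names; not S/370 or any actual IBM code) of what the announcement
text is describing: on a page fault, find a free frame (evicting a
victim page to disk if there is none), read the needed page in from
backing store, and restart the faulting instruction.

/* demand-paging sketch -- purely illustrative */
#include <stdbool.h>
#include <stddef.h>

#define NFRAMES 256

struct pte {                /* per virtual page */
    bool   valid;           /* page resident in a real frame? */
    size_t frame;           /* which frame, if valid */
    size_t disk_slot;       /* location on backing store */
};

extern struct pte  page_table[];
extern struct pte *frame_owner[NFRAMES];   /* reverse map: frame -> pte */
extern void   read_from_disk(size_t slot, size_t frame);
extern void   write_to_disk(size_t slot, size_t frame);
extern size_t pick_free_frame(void);       /* returns NFRAMES if none free */
extern size_t pick_victim(void);           /* replacement policy, e.g. LRU */

void page_fault(size_t vpage)
{
    size_t frame = pick_free_frame();

    if (frame == NFRAMES) {                /* no free frame: evict someone */
        frame = pick_victim();
        struct pte *victim = frame_owner[frame];
        write_to_disk(victim->disk_slot, frame);  /* skip if page is clean */
        victim->valid = false;
    }

    read_from_disk(page_table[vpage].disk_slot, frame);
    page_table[vpage].frame = frame;
    page_table[vpage].valid = true;
    frame_owner[frame]      = &page_table[vpage];
    /* hardware then re-executes the faulting instruction */
}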

then there is this ... from the science center in the 60s, regarding
how the virtual memory commitment might not know what it was getting
into ... "VM and the VM Community: Past, Present, and Future"
... several formats can be found here:
http://www.leeandmelindavarian.com/Melinda/

from above:

What was most significant was that the commitment to virtual memory was
backed with no successful experience. A system of that period that had
implemented virtual memory was the Ferranti Atlas computer, and that was
known not to be working well. What was frightening is that nobody who
was setting this virtual memory direction at IBM knew why Atlas didn't
work.

the other point of the previous post ... was global vis-a-vis local
LRU ... in the late 60s, there was some amount of academic work on
"local LRU" ... when i was doing global LRU as an undergraduate at the
univ.

more than a decade later, Jim Gray asked me to help a co-worker at
Tandem who was being blocked from getting his PhD thesis (in the area
of global LRU) ... recent post
http://www.garlic.com/~lynn/2011c.html#8 The first personal computer (PC)

from above:
According to the analysis of the Project on Defense Alternatives,
between 1998 and 2010 Congress appropriated to the Pentagon $2.144
Trillion (with a "T") more than was anticipated by the 1999 "baseline."
Of that amount, $1.113 Trillion was spent on the wars in Iraq and
Afghanistan, and $1.031 Trillion was added to "base" (non-war) Pentagon
spending. (See p. 3 of PDA's study, "An Undisciplined Defense:
Understanding the $2 Trillion Surge in US Defense Spending" at

I basically concur with PDA's numbers, which are from DOD and OMB budget
data as described on p. 61.)

What did you get for that extra $1 Trillion? Basically, you got a
smaller Navy and Air Force and a tiny increase in the size of the
Army. As an extra bonus, the hardware those forces use is now older
than it was in the Clinton administration in 1998.