"yes its me" <yim89876@gmail.com> writes:
How odd that all those airline booking systems, banking systems, etc etc etc
don't just use plain text files. There might just be a reason why they
don't.

a celebration for Jim Gray pointed out that his formalization of
transaction semantics significantly advanced computing for financial
operations ... it gave the financial auditors a higher level of
confidence in the computer records.

I've periodically mentioned that there were various trade-offs made in
system/r & RDBMS that made financial transactions (atm cash machines,
electronic commerce, etc) more efficient.

on the other hand, I've previously posted about being brought in,
nearly 20yrs ago, to the largest airline res system to look at the
system, starting with their ten "impossible" things for route/flight
search/selection.

from the 1950s, route/flight information was larger than could fit in
computer memory ... so a process grew up, with lots of human effort, to
build an indexed disk file of possible solutions (actually, it turns
out, a restricted subset of possible solutions) ... a paradigm that
continued to evolve over a 40yr period.

I looked at it and realized that the complete flt information for all
commercial airlines and all airports in the world could fit in 90s
computer memory (orders of magnitude smaller than the database of the
restricted subset of possible solutions). Then with some optimization it
was possible to do a search of the in-memory image of all flts and come
up with solutions in 1/100th the pathlength of doing the index file
(dbms) lookup of the restricted subset of possible solutions. With all
the information, it was then possible to implement (and demonstrate)
all ten impossible things ... and still be ten times faster in
pathlength than the index file lookup.
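the in-memory approach can be sketched roughly like this (all flight
data here is made up, purely to illustrate the idea of route search as
a traversal over an in-memory image of all flights, rather than lookups
against a pre-built disk file of canned solutions):

```python
# Hypothetical sketch: with every flight segment resident in memory,
# route search becomes a simple graph traversal over all flights.
from collections import defaultdict

flights = [  # (origin, destination, flight number) -- made-up data
    ("SFO", "ORD", "AA100"),
    ("ORD", "JFK", "AA200"),
    ("SFO", "DEN", "UA300"),
    ("DEN", "JFK", "UA400"),
]

by_origin = defaultdict(list)  # built once when the data is loaded
for org, dst, flt in flights:
    by_origin[org].append((dst, flt))

def routes(org, dst, max_legs=3):
    """Depth-first search of the in-memory flight graph."""
    stack, found = [(org, [])], []
    while stack:
        airport, legs = stack.pop()
        if airport == dst:
            found.append(legs)
            continue
        if len(legs) < max_legs:
            for nxt, flt in by_origin[airport]:
                stack.append((nxt, legs + [flt]))
    return found

print(sorted(routes("SFO", "JFK")))  # [['AA100', 'AA200'], ['UA300', 'UA400']]
```

with everything in memory there is no restricted subset: any routing
constraint can be checked during the traversal itself.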

then came the hand-wringing ... it turns out that an organization of
several hundred people had grown up around the care & feeding of the
index file of restricted subset solutions ... which would be totally
obsoleted (threatening a large number of jobs ... including the
executives in charge). finally they said that they hadn't wanted me to
actually fix the ten impossible things ... they had just wanted to tell
the board of the parent company that I was working on it (turns out one
of the board members had been an ibm executive that I had known 15yrs
earlier).

Stockman in "The Great Deformation: The Corruption of Capitalism in
America" pg464/loc9995-10000:
IBM was not the born-again growth machine trumpeted by the mob of Wall
Street momo traders. It was actually a stock buyback contraption on
steroids. During the five years ending in fiscal 2011, the company
spent a staggering $67 billion repurchasing its own shares, a figure
that was equal to 100 percent of its net income.

pg465/10014-17:
Total shareholder distributions, including dividends, amounted to $82
billion, or 122 percent, of net income over this five-year
period. Likewise, during the last five years IBM spent less on capital
investment than its depreciation and amortization charges, and also
shrank its constant dollar spending for research and development by
nearly 2 percent annually.

... snip ...

Stockman goes into some detail about how stock buybacks (reducing the
number of shares and increasing corp. value/share) are being used by
top executives to significantly increase their bonuses.

The Stockman scenario isn't about IBM owning stock or directly about
the number of outstanding shares ... it is about top executive bonuses
tied to price/share ... reducing the number of shares increases
aggregate corporate value divided by the number of outstanding shares
... which tends to drive up price/share; reducing the number of shares
also increases earnings per share ... which again tends to drive up
price/share.
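the arithmetic is simple enough to sketch (all the numbers below are
made up for illustration; this also ignores the cash spent on the
repurchase itself, which in practice reduces corporate value):

```python
# Illustrative arithmetic only: a buyback raises both value/share and
# earnings/share without changing the underlying business at all.
net_income = 10_000_000_000    # $10B annual net income (made up)
corp_value = 200_000_000_000   # $200B aggregate corporate value (made up)
shares = 1_000_000_000         # shares outstanding before buyback

def per_share(n_shares):
    """Return (earnings/share, value/share) for n_shares outstanding."""
    return net_income / n_shares, corp_value / n_shares

eps_before, value_before = per_share(shares)
eps_after, value_after = per_share(shares * 0.8)  # buy back 20%

print(eps_before, value_before)  # 10.0 200.0
print(eps_after, value_after)    # 12.5 250.0
```

same income, same business ... but 25% higher earnings per share, which
is exactly the metric the bonuses are tied to.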

Stockman spends lots of time on top executives totally absorbed with the basis for their bonuses.

from above:
If you review any of the numerous guides prepared for directors of
corporations prepared by law firms and other experts, you won't find a
stipulation for them to maximize shareholder value on the list of
things they are supposed to do. It's not a legal requirement. And
there is a good reason for that.

Directors and officers, broadly speaking, have a duty of care and duty
of loyalty to the corporation. From that flow more specific
obligations under Federal and state law. But notice: those
responsibilities are to the corporation, not to shareholders in
particular.

President Roosevelt said he appointed Kennedy as the first head of the
SEC because Kennedy knew all the tricks. Last year's USNI history
conference on cybercrime and cyber warfare had Mitnick as featured
speaker ... for a somewhat similar reason.

In the IBM specific numbers ... rather than attacking the credibility
of the messenger ... the better approach would be to provide evidence
that the quoted numbers aren't correct.

so are the numbers right or wrong? would it be better if the numbers
were presented without any quoted source?

as an aside, IBM marketing got a reputation for FUD, using it to
obfuscate and misdirect. IBM marketing FUD really started to reach its
peak during the IBM Future System period, when IBM was killing off all
its 370 products. The lack of 370 products during the FS period is
also credited with giving clone processors a market foothold. Then
when FS failed, there was a mad rush to get products back into the 370
pipeline ... but it was then decades of playing catchup.

note that there is a separate issue ... the Success of Failure period
from the last decade ... where they identified a whistleblower and
charged him with all sorts of serious (fabricated) offenses ... which
they eventually dropped years later. Congressional investigation into
the circumstances put the agency on probation, not allowed to manage
its own projects (a lot of the current situation could be attributed
to the continued heavy involvement of for-profit companies)
http://www.govexec.com/excellence/management-matters/2007/04/the-success-of-failure/24107/

from above:
Another reason why executives, in particular, may prefer share
buybacks is that executive compensation is often tied to executives'
ability to meet earnings per share targets. In companies where there
are few opportunities for organic growth, share repurchases may
represent one of the few ways of improving earnings per share to meet
targets. Thus, safeguards should be in place to ensure that increasing
earnings per share in this way will not affect executive or managerial
rewards

'Free Unix!': The world-changing proclamation made 30 years ago today

hancock4 writes:
In some smaller mainframe batch and on-line applications, a VSAM file
(or set of files) is easier to code, more efficient in execution, and
easier to maintain than a database structure.

On the mainframe, one advantage of VSAM over database is that everyone
knows VSAM, but there are several database systems available for the
mainframe and each has some differences; not all mainframe programmers
know all the differences.

Some DBA's go nuts with normalization, creating far too many tables
out of their zeal not to have any wasted fields.

this was somewhat the argument during the early days of system/r
(sql/rdbms) with criticism by the IMS group (IMS still handles large
proportion of ATM cash machine and other financial transactions).

The IMS group would say that the implicit RDBMS indexes doubled the
disk space (compared with the same application in IMS) and increased
the number of disk reads by 4-5 times (plowing thru the indexes on
disk).

System/R counter was that IMS explicitly exposed record numbers to
applications ... which significantly increased human administrative and
management effort.

This started to shift in the 80s with the significant increase in disk
capacity and reduction in cost/bit (mitigating the doubling in disk
space for the implicit indexes) and the increase in system memories
... allowing caching of indexes ... reducing the associated read
overhead for every access. At the same time, the explosion in computing
systems (because of the drop in prices) drastically increased demand
for skilled expertise (far beyond what could be supplied for an
IMS-everywhere world).
http://www.garlic.com/~lynn/submain.html#systemr

RDBMS "normalization" was an upfront, resource-intensive effort ...
especially for complex databases. In one large corporation there was a
report that they found 6000 different RDBMS databases that had 90% of
their information in common. The nature of RDBMS and the normalization
requirements made them very mission-specific. For any specific mission
(business process) it frequently became easier to adapt an existing
RDBMS, eliminating stuff not needed and adding just what was required.
That radically reduced the effort for that specific business process
... but significantly increased overall enterprise costs.

I've periodically mentioned that at the same time I was involved in
System/R, I also got sucked into helping with the implementation of
another kind of relational implementation ... one that didn't have the
upfront mission optimization of RDBMS. It had explicit bidirectional
links between every field and indexed every field. It was different
from "network" DBMS in that the links weren't direct record pointers
but used a content addressable paradigm for indexing links (analogous
to the index in RDBMS for a primary field). The equivalent of
normalization was effectively done as a side-effect of loading the
data.
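a hedged sketch of the idea (data and field names are made up): every
field of every record is indexed by content at load time, so any field
value finds records, and records link to each other through shared
values ... with no upfront normalization pass:

```python
# Content-addressable indexing sketch: index every (field, value) pair
# of every record as the data is loaded.
from collections import defaultdict

records = [
    {"name": "smith", "dept": "sales", "city": "boston"},
    {"name": "jones", "dept": "sales", "city": "austin"},
    {"name": "brown", "dept": "eng", "city": "boston"},
]

index = defaultdict(set)  # (field, value) -> set of record ids
for rid, rec in enumerate(records):
    for field, value in rec.items():
        index[(field, value)].add(rid)

def lookup(field, value):
    """Content-addressable lookup: record ids holding this field value."""
    return sorted(index[(field, value)])

def related(rid):
    """Records linked to rid through any shared field value."""
    out = set()
    for field, value in records[rid].items():
        out |= index[(field, value)]
    out.discard(rid)
    return sorted(out)

print(lookup("city", "boston"))  # [0, 2]
print(related(0))                # [1, 2]
```

the links are resolved through the index (content) rather than stored
record pointers ... which is the distinction from "network" DBMS noted
above.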

The difficulty with normalization is that it frequently involves
discarding information that isn't absolutely necessary for the specific
operation (which makes it difficult for new operations where there are
still a lot of unknowns).

A recent comparison with a large, complex, real-world data source ...
they spent a year on normalization before loading and then were not
discovering all the information they had hoped for. I was able to
demonstrate, from a raw start, having loaded all the data within a week
elapsed time and discovering information (that they hadn't been able
to find after 18 months of the RDBMS-based effort).

50th anniversary S/360 coming up

hancock4 writes:
Well, several huge mistakes in the recent project were (1) rolling it
out when it was clearly known it wasn't ready, and (2) rolling it
nationwide all at once instead of having a modest trial state and
scaling up based on that experience. (I don't know if the original
system development schedule was realistic).

I can't help but wonder if the principals involved in the recent
system had no clue about the S/360 OS troubles or ever heard of the
mythical man month. Heck, I wonder if they ever heard of S/360.

recently on the ibm-main mainframe mailing list (originated in the 1980s
on the old bitnet) ... there was lots of discussion of the recent IBM
website outages and the ACA website problems ... one of my posts which
includes some number of recent IBM failed efforts:
http://www.garlic.com/~lynn/2013m.html#103

50th anniversary S/360 coming up

hancock4 writes:
Way back then alcohol was a routine part of most business functions.
In the 1970s, I attended a train demonstration, and as soon as we got
underway they broke out the booze. People routinely ordered drinks
during business lunches that the vendors were paying for. Office
parties could get way out of hand (see the holiday party scene in "The
Apartment"). Indeed, alcohol was routinely shown in scenes in many
old movies and TV shows.

back in the 70s, I remember attending corporate functions where there
was dinner with lots of people in the ballroom and no alcohol. I
remember one at a hotel in atlanta where I slipped the waiter money to
bring vodka in water glasses to the table.

--
virtualization experience starting Jan1968, online at home since Mar1970

John Levine <johnl@iecc.com> writes:
Um, the Congress has been changing the law to protect Disney since
1998, arguably since 1976. How about if they just stop now?

When Disney made movies in the 1920s through 1970s, they presumably
thought 56 years was long enough, since if they didn't, they wouldn't
have made the movies. What has changed about the movie business so
that you can't make money in 56 years, but can in 95?

I think it is that you can make more money in 95yrs than you can in
56yrs (not that you couldn't make money in 56, but you can make more
in 95).

the movie industry is also somewhat notorious for cooking the books
to hide profit, example:

from above:
If you follow the entertainment business at all, you're probably well
aware of "Hollywood accounting," whereby very, very, very few
entertainment products are technically "profitable," even as they earn
studios millions of dollars. A couple months ago, the Planet Money folks
did a great episode explaining how this works in very simple terms.

... snip ...

--
virtualization experience starting Jan1968, online at home since Mar1970

a little over a decade ago we were in london ("the city") giving a
presentation about fraud (and how to compromise point-of-sale electronic
payments at retail stores) to Lloyd's principals involved in providing
retail store fraud insurance (and then had lunch with the chairman of
Lloyd's).

during the presentations my wife had an episode where she had trouble
thinking and was afraid it might be a stroke. we went to the hospital
and had a cat scan and other diagnostics, and by the end of the day it
was declared to be a "visual migraine" ... which was a big relief. also,
there was never any bill.

--
virtualization experience starting Jan1968, online at home since Mar1970

the price of SHARE registration included evening SCIDS (i believe up
through the 80s) ... a ballroom where there was an evening open bar. the
claim was that since IBM didn't allow for alcohol ... it also couldn't
show up on expense reports for reimbursement ... aka the SCIDS open bar
(included in SHARE registration) was primarily for the benefit of
IBMers.

... in the 70s, I had been told SCIDS stood for "Society for Continuous
Inebriation During Share"

another version:
SCIDS - skids n. A 6-hour social occasion, held every night of SHARE and
GUIDE meetings, during which customers (sometimes successfully) ply
IBMers with alcoholic beverages in plastic cups to try to find out
what's coming next. Originally informally known as Share Committee for
Inebriates, Drunkards, and Sots, but now officially stands for Social
Contact and Informal Discussion Sessions or SHARE Committee for Informal
Discussion Sessions. More familiarly known as the Society for
Cultivation of Indiscretions via Drinking Sessions.

a recent Bill Moyers segment on the "free trade" treaty agreement is
that it is being negotiated in secrecy, mostly with industry
representatives, with the objective of increasing/extending patent and
copyright provisions, increasing drug prices, and eliminating lots of
regulation (environmental, financial, etc) ... with the requirement
that countries have to align their laws with what is in the treaty
(the comment being that nearly all of it has nothing to do with "free
trade" ... the treaty label being industry obfuscation and
misdirection).

America's Defense Amnesia

Part of the spreading Success of Failure culture in the privatizing of
gov. by for-profit companies is that they realize they can make more
money off a series of failures than off immediate success.
Congressional investigation resulted in putting the agency on
probation, not allowed to manage its own projects (which may have
just been a ploy for more for-profit companies ... since congress
effectively gets kickbacks from the large beltway bandits)
http://www.govexec.com/excellence/management-matters/2007/04/the-success-of-failure/24107/

various COIN (counterinsurgency) tactics involve isolating insurgents
from support, locking down the borders to prevent outside supplies,
making it difficult for insurgents to get support from the rest of the
population, and controlling news and public opinion.

one of the scenarios from "Fiasco: The American Military Adventure in
Iraq" and the law of unintended consequences: the fabricated claims
about WMDs resulted in the invading forces being told to bypass ammo
dumps (loc2902 & loc2910). Later, when they got around to going back, a
million metric tons had evaporated (providing an enormous cache of
supplies).

There has been some amount written about the timidity of the commander
in executing (Boyd's) left hook in Desert Storm because of concerns
about outrunning supply lines. The ELP reference gives lots of reasons
why Abrams tanks are tightly tied to their cumbersome and slow-moving
logistics infrastructure, with few degrees of freedom (which Boyd
may/would not have been able to appreciate).

from above:
By 1990 Boyd had moved to Florida because of declining health, but
Cheney (then the Secretary of Defense in the George H. W. Bush
administration) called him back to work on the plans for Operation
Desert Storm.[10] [11] Boyd had substantial influence on the ultimate
"left hook" design of the plan.

... snip ...

several references say that the Republican Guard escaped because the
left hook wasn't there to stop them (various references say the
commander responsible for the left hook thought he would outrun his
supply lines). The ELP reference says they got away because the Abrams
were so much slower ... but then goes on to say that it wasn't the top
speed of the Abrams ... going into detail about how tightly tied they
were to their logistics infrastructure: enormous consumption of jet
fuel, shorter range on each fillup, much higher maintenance, lower
reliability (lots of hrs spent in the shop), etc.

1) Boyd would have been less likely to know that the Abrams wouldn't
be able to get in place to execute the left hook, 2) the tankers would
have been more likely to know that the Abrams couldn't get in place to
execute the left hook, 3) there are problems all around.

I was blamed for online computer conferencing on the internal network
during the late 70s and early 80s. The following was during the "Tandem
Memo" online discussion period (folklore is that when the executive
committee was told about online computer conferencing and the internal
network, 5of6 wanted to fire me) ... aka a large mass creates a
blackhole effect:

You have the wrong emphasis in "few good people" the correct emphasis
is on few. It is pretty obvious that controlling many has not
been very successful. It is easy to think of POK as IBM's contribution
to HEW's full employment of the mentally handicapped. Examining the
individuals fails to support that theory. What is closer to the truth
is the black hole theory. Creating IBM within IBM is an attempt
to isolate a body from the gravitational effects of the larger mass. A
"few good people" is a good rallying cry. It would be wonderful if we
could have the whole company a collection of a "few good people"
groups; we have to start someplace tho. Our problems can be tackled
from two approaches. The simple and well understood solution is to
operate small effective groups to get productive results. The
difficult, not well understood problem is how to organize large masses
of people while still remaining productive. The corporation is
currently lagging in productive results and we are going to need some
soon. Immediate problem is to motivate a "few good people" to get
results in the short term. Longer term problem is the whole
organization. Only working on solving the second problem may actually
reduce the problem to the first solution (the masses have all
dissolved).

and:
Tandem Memos - n. Something constructive but hard to control; a fresh of
breath air (sic). That's another Tandem Memos. A phrase to worry middle
management. It refers to the computer-based conference (widely
distributed in 1981) in which many technical personnel expressed
dissatisfaction with the tools available to them at that time, and also
constructively criticised the way products were developed. The
memos are required reading for anyone with a serious interest in quality
products. If you have not seen the memos, try reading the November 1981
Datamation summary.

How the IETF plans to protect the web from NSA snooping

How the IETF plans to protect the web from NSA snooping; An IETF plan
looks to HTTP 2.0 to help protect internet users from the NSA.
http://www.networkworld.com/news/2013/103113-nsa-ietf-275483.html

We were brought in by a small client/server startup that wanted to do
payment transactions on their server; they had also invented this
technology called SSL that they wanted to use. the result is now
frequently called "electronic commerce". we had to map SSL technology
to the payment business process.

several of the participants were heavily involved in privacy issues
and had done extensive, detailed public surveys and found the number
one issue was identity theft, specifically the kind that resulted in
fraudulent financial transactions, frequently as a result of data
breaches. an issue was that institutions typically take security
measures against threats & risks to the institution. in these cases,
the risk wasn't to the institutions but to the individual account
owners, and as such there seemed to be little or nothing being done.
there was some anticipation that the publicity from data breach
notifications would prompt countermeasures.

one of the issues in many of these cases is that the value of the
information to the crooks is 100 times more than the value to many
institutions (i.e. prior financial transactions that crooks can use to
perform fraudulent transactions). For instance, the value of the
information to the merchant is the profit from the transactions
(possibly only a couple of dollars), while the value of the
information to the crooks is the credit limit or account balance
(possibly several hundred to several thousand) ... as a result, crooks
may be able to afford to spend a hundred times more attacking a system
than can be spent defending it.
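the asymmetry is just a ratio (numbers below are illustrative, picked
to match the rough magnitudes in the text): the defender's rational
security budget scales with its profit per transaction, the attacker's
with the exposed credit limit or balance:

```python
# Toy arithmetic for the attacker/defender asymmetry described above.
merchant_profit_per_txn = 5.0     # dollars the merchant nets on a sale
crook_value_per_account = 500.0   # credit limit / balance exposed

asymmetry = crook_value_per_account / merchant_profit_per_txn
print(asymmetry)  # 100.0 -- crooks can rationally outspend defenders ~100:1
```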

another issue in the current electronic payment information paradigm
is its dual-use characteristic ... to prevent fraud, the information
needs to be kept completely confidential and *NEVER* divulged ... while
at the same time it is needed in dozens of business processes at
millions of locations around the world (we've periodically claimed
that even if the planet were buried in miles of information-hiding
encryption, it wouldn't prevent information leakage).

we were brought into the x9a10 financial standard working group (which
had been given the requirement to preserve the integrity of the
financial infrastructure for *ALL* retail payments) and were co-authors
of the x9.59 financial transaction standard, which slightly tweaked the
paradigm and eliminated the value of previous transaction information
(& account numbers) to crooks ... and therefore eliminated the major
motivation for most current breaches. it also eliminated the current
major use of SSL in the world today ... the earlier stuff we had worked
on for electronic commerce. x9.59 reference
http://www.garlic.com/~lynn/x959.html#x959

--
virtualization experience starting Jan1968, online at home since Mar1970

Bounded pointers

tymshare did gnosis in the late 70s & early 80s for mainframe 370
... which was spun off as keykos when M/D bought tymshare (disclaimer:
i was brought in to evaluate gnosis as part of the spinoff).
http://cap-lore.com/CapTheory/upenn/

one of the objectives of gnosis was to provide the ability to offer 3rd
party applications on a commercial online service bureau platform ...
with use-charge accounting traced back to each application use
(allowing prorated remittance to the 3rd parties). I estimated that
1/3rd of system pathlength was involved in that accounting. in the
transition to keykos ... all that accounting overhead was removed,
significantly improving keykos throughput/performance.

my biggest quibble with "Great Deformation" is that it glosses over
the rating agencies selling triple-A ratings on toxic CDOs (when they
knew they weren't worth triple-A, per the Oct2008 congressional
hearings). Those triple-A ratings enabled the over $27T done during
the bubble ... and that $27T significantly dwarfs many of the other
issues cited (radically changing the mortgage market, securitized
mortgages being used to obfuscate the underlying values, and
transactions being routed thru wallstreet where enormous commissions
and fees were skimmed). reference to over $27T:
Evil Wall Street Exports Boomed With 'Fools' Born to Buy Debt
http://www.bloomberg.com/apps/news?pid=newsarchive&refer=home&sid=a0jln3.CSS6c

from above:
Although the robot technically cheats because it watches your hand and
can recognize what shape you are intending to make and beat it before
you even know what is happening. Apparently it takes about 60ms for
you to shape your hand, but the robot can recognize the shape before
it is completed, and only takes 20ms to counter your shape so the
results appear to the human opponent to be virtually simultaneous.

late 70s, a co-worker wrote a multi-user space war game with a server
and client interfaces that ran on (fast) text display screens
... supporting clients distributed over the network. The syntax
between the server and client was fairly simple, and fairly soon
people started writing robotic clients that would beat all humans
(being able to react significantly faster). Since it wasn't possible
to differentiate whether a client was a real human or a 'bot
... eventually the server was modified to debit energy use
non-linearly as the interval between specific commands dropped below a
threshold ... which somewhat leveled the playing field.
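a hypothetical sketch of that leveling mechanism (the threshold and
cost curve here are assumptions; the actual values aren't recorded):
energy cost per command grows non-linearly once the interval since the
previous command drops below a threshold, so superhuman robotic command
rates become prohibitively expensive:

```python
# Non-linear energy debit: commands faster than a human-scale threshold
# cost quadratically more energy.
THRESHOLD_MS = 250   # assumed human-scale reaction threshold, in ms
BASE_COST = 1.0

def energy_cost(interval_ms):
    """Energy debited for a command issued interval_ms after the last one."""
    if interval_ms >= THRESHOLD_MS:
        return BASE_COST
    # quadratic penalty as the interval shrinks below the threshold
    ratio = THRESHOLD_MS / interval_ms
    return BASE_COST * ratio * ratio

print(energy_cost(500))  # 1.0   -- human-speed commands cost the base rate
print(energy_cost(25))   # 100.0 -- 10x too fast costs 100x as much
```

a 'bot can still play, it just can't sustain superhuman command rates
without running out of energy ... which is the leveling effect.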

jperryma@PACBELL.NET (Jon Perryman) writes:
• UNIX: TCP/IP was not publicly available until the 70's. Prior to
that, simple communications were available.

* z/OS: SNA existed long before TCP/IP was available. SNA was a
robust, reliable and secure communications methodology. Once TCP
became available, we had the same situation as Betamax versus VHS. TCP
won.

arpanet was host-to-host with IMPs from the late 60s ... and in many
ways similar to SNA (but well before SNA). the big problem was that it
wouldn't support a large distributed ... and frequently autonomous,
decentralized ... infrastructure, and so a start was made on an
internetworking protocol.

the great changeover of arpanet to the internetworking (tcp/ip)
protocol came 1Jan1983. at the time there were approx. 100 IMP network
nodes with around 255 connected hosts.

by comparison, in 1983 the internal network was rapidly approaching
1000 nodes, which it passed Jun1983 ... some internal network
references for 1983 in this past post (in some sense it had a gateway
in every node, which greatly simplified semi-autonomous expansion of
the network and was a major factor in it being larger than the
arpanet/internet from just about the beginning until possibly late '85
or early '86)
http://www.garlic.com/~lynn/2006k.html#8
other past posts mentioning internal network
http://www.garlic.com/~lynn/subnetwork.html#internalnet

one of the issues was SNA/VTAM only supported up to 56kbit links ... in
the mid-80s, we were having some equipment built on the other side of
the pacific. Friday before a trip, the communication group announced a
new communication discussion group with the following definitions

As part of trying to justify only supporting up to 56kbit links, the
communication group prepared a report for the executive committee on
why customers wouldn't want T1 support until sometime in the 90s. As
part of the report, they did a study of 37x5 "fat-pipe" support at
customers ... multiple parallel 56kbit links treated as a single
logical link. They showed that the number of installations dropped to
zero around five or six parallel 56kbit links. What they possibly
didn't realize was that telco tariffs for 5 or 6 56kbit links were
about the same as for a single T1 link ... and customers would switch
to full T1 and non-IBM boxes. At the time, we did a trivial customer
survey of installed T1 links and found over 200.
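the back-of-envelope arithmetic (the tariff ratio is an assumption
based on the text above, not the study's actual numbers) shows why the
fat-pipe count dropped to zero at 5-6 links:

```python
# Around 5-6 parallel 56kbit links, the tariff approaches a full T1's,
# while the T1 delivers several times the bandwidth.
LINK_56K = 56_000      # bits/sec per 56kbit circuit
T1 = 1_544_000         # bits/sec for a T1 (each direction)

for n in (5, 6):
    fat_pipe = n * LINK_56K
    print(n, fat_pipe, round(T1 / fat_pipe, 1))
# at roughly the same tariff, the T1 gives ~4.6-5.5x the bandwidth
```

so any customer needing more than ~5 parallel links rationally jumped
straight to a full T1 (and, per the above, to non-IBM boxes).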

I was also working with various institutions and NSF ... and we were
supposed to get $20M to tie together the NSF supercomputer centers.
Then congress cut the budget, a few other things happened, and finally
NSF released an RFP. Internal politics prevented us from bidding on the
RFP ... the director of NSF tried to help, writing the company a letter
(copying the CEO), but that just made the internal politics worse
(since it referenced that what we already had running was at least
5yrs ahead of all RFP responses).

along the way the communication group was spreading all sorts of FUD and
misinformation (regarding NSF supercomputer backbone) ... some of the
misinformation email was collected by somebody in the communication
group and forwarded to us ... reference here (heavily redacted to
protect the guilty)
http://www.garlic.com/~lynn/2006w.html#email870109

In the later part of the 80s, the communication group attempted a
patchwork solution with the 3737 ... a box that supported a T1 link
... but only had aggregate throughput of 2mbit/sec (T1 is full-duplex
1.5mbit/sec, or 3mbit/sec aggregate; EU T2 is full-duplex 2mbit/sec,
or 4mbit/sec aggregate). Because VTAM line processing wouldn't keep the
faster links busy ... the 3737 spoofed a CTCA to the host vtam and
immediately ACKed the local VTAM transmission. The 3737 then had a huge
amount of buffering and a non-VTAM line paradigm, with the remote 3737
trying to keep the line running at full speed. past posts with more
3737 details:
http://www.garlic.com/~lynn/2011g.html#75
http://www.garlic.com/~lynn/2011g.html#77

if there was to be any conversion of the internal network, it would have
been significantly more cost effective and better performance if the
internal network had been converted to tcp/ip ... similar to what bitnet
did.

late 80s, a senior disk engineer got a talk scheduled at an annual,
internal, world-wide communication group conference, supposedly on the
subject of 3174 performance ... but opened the talk with the statement
that the communication group was going to be responsible for the demise
of the disk division. The communication group had corporate strategic
ownership of everything that crossed the datacenter wall. They were
strenuously fighting off distributed computing and client/server,
trying to preserve their dumb terminal paradigm and install base. The
disk division was seeing a drop in disk sales as data was fleeing the
datacenter for more distributed-computing-friendly platforms. The disk
division had come up with a number of solutions to correct the problem,
but was constantly vetoed by the communication group. This was a
significant factor contributing to the company going into the red a few
years later.

elardus.engelbrecht@SITA.CO.ZA (Elardus Engelbrecht) writes:
Around 1990 and so when death of mainframe has been predicted [1],
someone said to me: The technology to completely replace big iron has
not been in place properly. Now, it is still, to my astonishment,
somewhat true! Rather, new things evolved in the meantime.

mid-80s, top executives were predicting revenue would double (to
approx. $215B in today's dollars), mostly based on mainframes, and
instituted a massive internal building program to double mainframe
manufacturing capacity ... this was just at the start when things began
to go in the opposite direction (and it wasn't exactly career enhancing
to point it out; also see the previous post for the reference about the
drop in disk sales and the communication group's stranglehold on the
datacenter).

early 90s, the company went into the red and top executives re-orged the
company into the 13 "baby blues" in preparation for breaking up the
company ... this was before the board brought in Gerstner to reverse the
breakup and resurrect the company (he refocused the company from
hardware products to services). The people in POK had been expecting to
be totally shutdown and were sending out email referencing "would the
last person to leave POK, please turn out the lights".

Mainframe sales have been running around $5B/annum (compared to the
prediction for $200B+) ... or the equivalent of approx. 180
max. configured z196.

--
virtualization experience starting Jan1968, online at home since Mar1970

... although there were freely available tcp/ip protocol
implementations on lots of platforms ... which could be used for
private networks ... even if they didn't connect to regional networks
and/or the backbone.
i've periodically commented that tcp/ip was the technology basis for the
modern internet, nsfnet backbone was the operational basis for the
modern internet and (finally) cix was the business basis for the modern
internet.

trivia, until Postel ("rfc editor") passed, he let me do part of STD1.

as undergraduate in the 60s, I did a lot of operating system changes
... both os/360 and (virtual machine) cp67. cp/67 shipped with 1052 and
2741 terminal support ... but univ. also had ascii tty terminals ... so I
did the work to add ascii tty terminal support. cp/67 did dynamic
terminal type identification for 1052 & 2741s ... so I tried to extend
it for tty support (this was picked up by the science center and shipped
in standard cp67) ... which didn't quite work the way I wanted. leased
lines were fine ... but I wanted a single dialup number ("hunt group")
for all terminals. the problem was IBM had taken short cut in terminal
controller; it was possible to dynamically change the line-scanner for
each port ("SAD" CCW) ... but the line-speed (oscillator) was hard-wired
for each port.

somewhat as a result the univ. started a clone controller project ...
reverse engineer a channel interface board, program an interdata/3
minicomputer to emulate ibm terminal controller (with own channel
interface board) ... supporting both dynamic terminal type and dynamic
terminal line-speed. this later was extended with an interdata/4 for the
channel interface and cluster of interdata/3s for port interfaces. this
was made available to interdata which marketed it commercially (later
Perkin/Elmer bought Interdata and continued to sell under PE logo).
Four of us got written up for being responsible for some part of the
clone controller business ... some past posts
http://www.garlic.com/~lynn/subtopic.html#360pcm

early 70s, IBM had the Future System project (would completely replace
360/370) ... a major motivation was to significantly raise the bar for
clone controllers (lack of 370 products during this period is also
credited with giving clone processors a market foothold). Later when
FS imploded there was mad-rush to get products back into the 370
pipelines. some past posts
http://www.garlic.com/~lynn/submain.html#futuresys

there have been claims that the extreme complexity in the PU5/PU4
(VTAM/NCP) interface was attempt to meet the base FS objectives of
significantly raising the bar as countermeasure to clone controllers
(major design requirement for sna architecture).

--
virtualization experience starting Jan1968, online at home since Mar1970

jperryma@PACBELL.NET (Jon Perryman) writes:
On the other side, Unix has seen many of its improvements because of
z/OS. You may not think so but look at the timelines and make
comparisons. The last one I personally saw was high availability. IBM
implemented SAP/HA on z/OS and SAP received the SAP/HA
modifications. A few years later, Linux-HA came out to support SAP/HA.

about the same time that SNA architecture was originally being created
(major requirement was complexity of vtam/ncp interface as
countermeasure to clone controllers), my wife was co-author of
peer-to-peer networking architecture (internal document AWP39).

which saw little uptake (except for ims hotstandby) until sysplex (&
parallel sysplex). periodic battles with the communication group trying
to force her into using sna/vtam for loosely-coupled operation resulted
in her not staying long in the position.

i was also asked to write a section for the corporate strategic
continuous availability document ... however both Rochester (AS/400) and
POK (mainframe) complained that they couldn't meet the specification
... and the section was removed.

shortly later, cluster scaleup was transferred and we were told we
couldn't work on anything with more than four processors (possibly
contributing was mainframe DB2 complaining that if we were allowed to
proceed, we would be at least five years ahead of them).

from above ... non-mainframe (rs/6000) DB2
In October 2009, IBM introduced its second major release of the year
when it announced DB2 pureScale. DB2 pureScale is a database cluster
solution for non-mainframe platforms, suitable for Online Transaction
Processing (OLTP) workloads. IBM based the design of DB2 pureScale on
the Parallel Sysplex implementation of DB2 data sharing on the
mainframe. DB2 pureScale provides a fault-tolerant architecture and
shared-disk storage. A DB2 pureScale system can grow to 128 database
servers, and provides continuous availability and automatic load
balancing.

50th anniversary S/360 coming up

hancock4 writes:
AFAIK, the model 30 was a good machine, reasonably priced, and quite
suitable for the installations it was designed for--basically former
1401 shops. I believe it was the most popular S/360 model.

• Of the 26,000 IBM computer systems in use, 16,000 were S/360 models
(that is, over 60%). [Fig. 1.311.2]
• Of the general-purpose systems having the largest fraction of total
installed value, the IBM S/360 Model 30 was ranked first with 12%
(rising to 17% in 1969). The S/360 Model 40 was ranked second with 11%
(rising to almost 15% in 1970). [Figs. 2.10.4 and 2.10.5]
• Of the number of operations per second in use, the IBM S/360 Model 65
ranked first with 23%. The Univac 1108 ranked second with slightly
over 14%, and the CDC 6600 ranked third with 10%. [Figs. 2.10.6 and
2.10.7]

... snip ...

--
virtualization experience starting Jan1968, online at home since Mar1970

Aging Sysprogs = Aging Farmers

jperryma@PACBELL.NET (Jon Perryman) writes:
2. It is the only tool where we can easily segregate interactive
versus long running programs. This allows WLM to give more resources to
interactive users because they are personally waiting. Sysprogs
encourage its use by setting WLM such that a user gets less than
batch priority when they use too many resources.

as an undergraduate in the 60s, I did dynamic adaptive resource
manager for cp67 ... which was picked up and released as part of the
product. A default policy was "fair share" resources ... nobody got
more resources than anybody else ... regardless of interactive or
background/batch characteristics ... default policy gave interactive
more timely resources ... but not more resources. some past posts
http://www.garlic.com/~lynn/subtopic.html#fairshare

in the simplification morph from cp67 to vm370, the dynamic adaptive
code was dropped ... however customers would continue to advocate in
SHARE to bring it back.

one of my hobbies in IBM was producing, distributing & supporting
production systems for large number of internal datacenters
... including the internal IBM world-wide sales&marketing HONE systems
... some past posts
http://www.garlic.com/~lynn/subtopic.html#hone

this was also during the Future System period ... when 370 efforts
were being shutdown ... but I continued with 360/370 and (not exactly
career enhancing) criticized what they were doing ... some past posts
http://www.garlic.com/~lynn/submain.html#futuresys

when Future System finally imploded, the mad rush to get products back
into the 370 pipeline contributed to decision to pickup a lot of stuff
I had been doing (and was running widely inside the company) and
release it to customers.

however, they managed to make the case that kernel software should
still be free. With the lack of products during the future system
period and clone processors getting market foothold ... a decision was
made to start charging for kernel software ... and the decision was
made to make my scheduler a separate kernel component and the guinea
pig for starting to charge for kernel software (as a result I got to
spend a lot of time with the business and legal people about policies
for kernel software charging).

later with the big explosion in online & interactive vm/4300 machines
... both with customers and internally ... the company made a decision
that vm/cms was the strategic interactive offering. It was then that
the TSO product manager asked me if I would port my dynamic adaptive
resource manager to MVS ... hoping that I could help fix the really
horrible TSO human factor characteristics ... old email reference:
http://www.garlic.com/~lynn/2006b.html#email800310

as I've mentioned periodically ... I declined the offer ... in part
because there was an enormous number of other MVS issues (not just
scheduling) that affected its poor interactive characteristics.

as an aside, one of the problems I had (re)releasing my dynamic
adaptive resource manager ... was somebody from Armonk (with past
history in POK MVS) non-concurred with approval for the release
because it didn't have a lot of manual tuning knobs (because that was
state-of-the-art at the time, with MVS having an enormous number of manual
tuning knobs). I tried to explain that dynamic adaptive eliminated the
necessity for all those manual tuning knobs ... since it was
repeatedly calculating them dynamically adjusting for configuration
and workload. I finally created a "joke" ... I put in manual tuning
knobs ... and described the algorithms, code in detail as well as
shipping source code. The "joke" was that the dynamic adaptive code
had more degrees of freedom than the manual tuning knobs ... so any
knob choice could be compensated for by the dynamic code. All the code
was also packaged in a source module I named "STP" (after the
television commercials about the "racer's edge").

--
virtualization experience starting Jan1968, online at home since Mar1970

Serialization without Enque

tony@HARMINC.NET (Tony Harminc) writes:
It serializes happily against all the CS variations, TS, and the newer
interlocked-update instructions like ASI, LAA, and so on. And there
are cases where a simple ST or the like can interoperate usefully with
CS. For instance, if you update a counter with CS, it is safe to zero
it with ST.

compare-and-swap is atomic instruction ... it does the compare and
does the store if the compare matches ... it solves the problem of
interrupts and other processes doing something while
interrupted. normal process is if the compare doesn't match ... and
the store isn't done ... loop back and restart the operation.

the initial attempt to include compare-and-swap in 370 was rejected
because the pok favorite son operating system people said it wasn't
needed, that TS was more than sufficient for multiprocessor locking
(single kernel spin-lock). the 370 architecture owners said that to
get compare-and-swap justified for 370 (over pok objections), had to
come up with purposes other than multiprocessor locking. Thus was born
the examples (still in principles of operation) for interrupt-enabled,
multi-threaded applications (like large dbms) ... whether or not
running in multiprocessor configuration or not.

--
virtualization experience starting Jan1968, online at home since Mar1970

Aging Sysprogs = Aging Farmers

shmuel+ibm-main@PATRIOT.NET (Shmuel Metz , Seymour J.) writes:
Due to the NIH syndrome, VPAM and VIPAM cannot be imported from TSS.
However, I can provide you with a subroutine for the functionality of
BLDL, FIND and STOW for QSAM, with a single OPEN for multiple members.
You'd have to refit it for current releases.

note that this is an account of one of the primary people responsible for
HASP ... who then led a group that did something similar for MFT that he
called RASP ... the results were used as part of the justification for
moving to virtual memory for all 370s.
http://www.garlic.com/~lynn/2011d.html#73

however, in part because company wouldn't go ahead with it (possibly
also nih) ... he left and redid it at Amdahl (from scratch in clean
room) ... and even tho IBM wasn't going to do anything with it
... there was still litigation and court ordered code examination
... which only was able to find a few lines of code that could be
considered similar.

Note that later AT&T had contracted with ibm to do a stripped down
TSS/370 kernel called SSUP, on top of which unix infrastructure was
layered.

There was somebody still at univ. that had done a native port of unix
to 370 ... and there was an attempt to hire him into ibm ... but it
wasn't successful and he went to amdahl where "UTS" was done instead. I
knew some of the people involved in both projects and there was some
amount of internal politics and I got asked about it ... so i suggested
that they try and meld the projects ... somewhat along the lines of TSS
SSUP for AT&T ... which never happened. "UTS" during development was
referred to as GOLD for the element Au (or amdahl unix).

As an aside, when i was undergraduate, the univ. was talked into
upgrading 709/1401 combo to 360/67 supposedly for tss/360 ... however
at the time, tss/360 never really came to anywhere near production and
the 360/67 ran mostly as 360/65 with os/360 ... i got undergraduate
job responsible for care&feeding of the system. Jan1968 people from
science center to install cp67. I got to play with cp67 on weekends
... when i wasn't doing os/360 maint. post with fall 1968 share
presentation about some of the os/360 work (careful reorder of stage2
sysgen cards to optimally place datasets and pds members that improved
univ. student fortran job throughput by nearly three times) and cp67
(work) rewrote lots of cp67 that drastically cut pathlenth/overhead.
http://www.garlic.com/~lynn/94.html#18

however, sometimes on weekends, i had to work around ibm se playing
with tss/360. We did set up a common benchmark for fortran edit, compile
and execute with scripts and simulated users. turns out that
cp67 with 35 simulated users running the script outperformed and had
better interactive response than tss/360 with only 4 users running the
same script. In any case, i learned quite a bit about how tss/360 did
things wrong.

later at science center ... i did a paged mapped filesystem for
cp67/cms avoiding a lot of tss/360 performance issues (it was also
somewhat in competition with the multics group on the 5th flr that was
also doing paged mapped filesystem).

later when future system imploded, the mad rush to get products
back into the 370 product pipeline contributed to the decision to release
some of the 360/370 stuff done all during the FS period ... recent reference in
this thread
http://www.garlic.com/~lynn/2013n.html#22

... but not the page mapped stuff ... presumably because page-mapped
filesystem was tainted with both tss/360 and FS efforts ... even tho i
could show three times greater throughput with cms paged-mapped
filesystem compared to standard cms filesystem (on the same hardware)
past posts mentioning page mapped filesystem
http://www.garlic.com/~lynn/submain.html#mmap

SNA vs TCP/IP

PaulGBoulder@AIM.COM (Paul Gilmartin) writes:
There's always a reason. Rarely is it an analogue of Gresham's Law,
to which one partisan attributed the triumph of UNIX over VMS ("Bad
software drives out good!") Betamax succumbed to the greater capacity
of VHS cartridges; a decisive advantage in the eyes of consumers at a
tipping point in time despite the higher quality of Beta in professionals'
view. For many years thereafter I saw Beta only in the kits of TV news
reporters on location. I think VHS had caught up in quality and Beta
in capacity, but both camps had too much capital investment to switch.

remember, sna didn't have internetworking or networking ... there was a
central vtam that had a mapping to each device.

for a time i reported to the same executive as the person responsible
for APPN (internal architecture document awp164, i mentioned
previously my wife much earlier was co-author of peer-to-peer
networking, awp39) ... which provides networking layer ... i would
periodically chide the person to not waste their time trying to help
sna (because they wouldn't appreciate it) and come work on *real*
networking.

as it turns out the communication group did non-concur on the draft
announcement letter for APPN ... and it took six weeks of escalation
to resolve the issue ... where the APPN announcement letter was
carefully rewritten to not imply any relationship existed between APPN
and SNA.

trivia ... person responsible for DNS had worked at the cambridge
science center when he was student at MIT.

also in the mid-80s, i had gotten sucked into an effort to take
some support done by one of the babybells and turn it into a product
... we tried very hard to isolate the effort from internal political
influence of the communication group to block it ... which they
managed to do anyway (which can only be described as truth is stranger
than fiction). I probably didn't help things by doing a presentation
on the effort at one of the regular SNA architecture review board
meetings ... which had top technical and executives in the
audience. part of that presentation
http://www.garlic.com/~lynn/99.html#67

basically what the babybell had done was implement an NCP/SSCP
emulation on Series/1 ... which was a significantly more powerful
computer than what was used for the NCP/37x5 controllers ... and then
actually run everything in real networking infrastructure ... except
at the boundary spoofing to host vtams. all resources were simulated
as "cross-domain" ... but was really fully distributed resource
management with no-single-point-of-failure.

the use of real networking within the operation of the infrastructure
made possible a lot of things that weren't possible in a pure sna/vtam
environment ... as well as having a much more powerful processor than
what was used in the 37x5. part of the effort also included moving the
implementation from series/1 to rios (rs/6000) after the initial
release.

the issue was that OSI didn't have any internetworking layer (either)
... it would basically be reverting to the pre-tcp/ip era of the 70s

iso also had policy that none of the ISO network-related standards
bodies could standardize any protocol that didn't conform to the OSI
model.

to top it off, there were various references that ISO didn't actually
have requirement that ISO standards be implementable ... in contrast
IETF (aka internet) has requirement that there has to be at least two
interoperable (different) implementations to progress in the standards
process.

disclaimer: i was involved in taking HSP to X3S3.3 (iso chartered
us standards group responsible for standards related to level3&4, aka
networking and transport ... in the OSI model). some past posts
http://www.garlic.com/~lynn/subnetwork.html#xtphsp

it was rejected for violating the osi model ... at least

1) supported an internetworking layer that doesn't exist in OSI
... non-existent layer that sits between bottom of layer4 and top of
layer3

SNA vs TCP/IP

edgould1948@COMCAST.NET (Ed Gould) writes:
A LONG LONG time ago we had bumped into the maxsuba of 255.
IBM almost simultaneously came out with an outrageously expensive add
on (memory was $5000 a month) to get rid of it.
My management said NFW to the cost and told me to live with it. We had
to start turning away customers and then the heat started and they
hired a SNA "pro" and he came up with a few suggestions - but mind you
it was a 2 or 3 month buffer before we had no other option. (His $$
should have gone for bonuses to the networking staff, IMO).
Our people costs were high because of trying to keep track of SNA
paths (of other users and ours). we finally bought the RTG software
which helped us but was another $100 (?) a month. Overall trying to
keep disparate networks in sync was an utter nightmare.
I was/am a fan of SNA until it comes to (large) networks.

note that jes2 had similar problem. the code was brought over from
hasp (it used to have the characters TUCC in cols 68-71) ... it used
left-over entries in the 255 entry hasp pseudo device table ...
frequently somewhere around 160, maybe 180 entries. furthermore any
traffic coming into jes2 node where either the origin node or the
destination node wasn't in the local table ... the traffic was
trashed. jes2 network had also jumbled the job control fields and the
networking fields in the header ... and jes2 had nasty habit of
crashing mvs receiving traffic from a jes2 at a different release
level.

the internal network quickly passed 255 nodes, so any jes2 nodes had to
be kept purely on the periphery of the network. The standard network was
much more robust and in many ways had internetworking capability ... the internal
network was larger than the arpanet/internet from just about the
beginning until sometime late 85 or early 86.

also because of the issue with jes2 release incompatibilities between
nodes tending to bring down mvs ... a large library of release specific jes2
drivers grew up for vnet. basically the vnet jes2 driver would convert
the jes2 headers into a canonical form ... and a specific vnet jes2
driver was started that corresponded to the directly connected jes2
system ... which converted from canonical form into form expected by
that jes2 system.

there was an infamous case at one point where mvs systems in hursley
were crashing because of new fields added to the jes2 systems in san
jose. the local hursley vnet systems were blamed ... because the
necessary changes hadn't been made to keep the hursley jes2 systems
from crashing mvs.

the original announce for jes2 networking also had a big problem ... it
had been somewhat developed using a methodology that predated the
charging for software ... even with a lot of the code being picked up
from a customer site. the company was under mandate that the price
charged had to cover the distribution and maintenance costs ... and
the price times the number of customers also had to cover the
upfront development costs ... but because of the expensive process
... there was no price for jes2 networking that, times the expected
number of customers (at that price), would meet the criteria to cover
all costs.

POK had also convinced the corporation to not announce any new vm370
features ... including the vnet networking support used to run the
company. The jes2 group got that reversed ... because they could
announce a combined jes2+vm370 product ... where the combined sales
were able to cover the jes2 costs (since the vm370 product costs were
almost negligible) ... aka some creative bookkeeping.

The same year that arpanet converted to internetworking (starting off
with about 255 hosts ... but the internetworking change-over removed
an enormous barrier to growth) ... the internal network passed 1000
nodes. some reference to the internal network activity that year.
http://www.garlic.com/~lynn/2006k.html#8

sometime after that year ... jes2 networking got around to changing
from spare slots in the 255 entry pseudo device table to a 999 entry table
... but it was way too late to help with internal network (that had
already passed 1000 nodes). furthermore they still hadn't fixed the
release level incompatibility problems that could bring down the
receiving mvs system (making sure that jes2/mvs systems still had to
be restricted to the network periphery, with special vnet
filter/reformatter).

SNA vs TCP/IP

shmuel+ibm-main@PATRIOT.NET (Shmuel Metz , Seymour J.) writes:
You brought up error recovery. I was hinting that SDLC has error
recovery and IP doesn't. The error recovery in TCP is at a higher
level, and I don't see why you expect it to have lower overhead.

part of 80s protocol design was that modern hardware transmission
technology was moving to employ sophisticated forward error correction
(FEC) ... higher level protocols saw a lot fewer transmission errors
... so higher level protocols could handle the fewer packets lost to
transmission errors ... as well as things like packet drops
because of congestion at intermediate nodes.

however, there were also other sorts of issues. i've mentioned being
brought in to consult at small client/server startup that wanted to do
payment transactions on their server, they had also invented this
technology called SSL (the result is now frequently called "electronic
commerce") ... and we had to map SSL technology to payment transaction
process.

one of the very early adopters was a national sports retailer that was
expecting increased web traffic during half-time of sunday professional
football. they had multiple lines into different places into the
internet backbone for availability. one of the complicating factors
was that some number of ISPs still took routers down on sunday for
maintenance (creating outages).

I had originally worked on using routed protocol to advertise
ip-addresses via different routes (for availability) ... but the
internet backbone started the process of moving to hierarchical
routing ... so fault masking for availability had to fall back to
(DNS) multiple A-records (instead of a host name mapping to a single
ip-address, it would have a table of 2 or more ip-addresses).

I explained to the browser group that they had to support multiple
A-records and why (if unable to make connection with 1st ip-address,
continue to try and use any additional ip-addresses) ... they
responded that it was too complex ... i provided them with example
code from 4.3 tahoe/reno clients ... and they still said it was too
complex (it took something like another 12 months before i got them to
support multiple a-records; aka while i had absolute authority over the
implementation deployment for the webserver to payment gateway ... i
could only advise/recommend on the browser to webserver
implementation).
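the multiple A-record fallback amounts to a short loop. a sketch using
the modern POSIX getaddrinfo() interface (which post-dates the events
above and replaced the old gethostbyname address array): resolve the
host name to its full list of addresses and try each in turn, instead
of giving up when the first one fails.

```c
#include <netdb.h>
#include <netinet/in.h>
#include <arpa/inet.h>
#include <stdio.h>
#include <string.h>
#include <sys/socket.h>
#include <unistd.h>

/* Sketch of the multiple A-record fallback described above: resolve the
 * host name to its full list of addresses and try each in turn, instead
 * of giving up when the first one fails.  The for-loop is the part the
 * early browser lacked. */
int connect_any(const char *host, const char *port)
{
    struct addrinfo hints, *res, *ai;
    int fd = -1;

    memset(&hints, 0, sizeof hints);
    hints.ai_family = AF_UNSPEC;       /* both A and AAAA records */
    hints.ai_socktype = SOCK_STREAM;

    if (getaddrinfo(host, port, &hints, &res) != 0)
        return -1;

    for (ai = res; ai != NULL; ai = ai->ai_next) {   /* try each address */
        fd = socket(ai->ai_family, ai->ai_socktype, ai->ai_protocol);
        if (fd < 0)
            continue;
        if (connect(fd, ai->ai_addr, ai->ai_addrlen) == 0)
            break;                     /* connected: stop trying */
        close(fd);                     /* this address failed; try next */
        fd = -1;
    }
    freeaddrinfo(res);
    return fd;                         /* connected socket, or -1 */
}
```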

--
virtualization experience starting Jan1968, online at home since Mar1970

SNA vs TCP/IP

Robert Wessel <robertwessel2@yahoo.com> writes:
That's not a very valid comparison. SDLC is mostly a link level
protocol; IP, UDP and TCP are not. In many cases there is
considerable error recovery on links that IP is run over - if for no
other reason than the end-to-end error recovery in TCP works poorly if
there are too many errors. In any event, the lack of end-to-end error
checking and recovery in SNA is a major failing, as it requires near
perfect error management on every link and node between the two
endpoints.

a big issue with tcp throughput is slow-start as the mechanism for
congestion control/avoidance ... aka in an enormously large heterogeneous
network with dozens of hops end-to-end and bursty traffic ... there is
relatively high probability of periodic congestion. dropping a packet
and then restarting slow-start ... can enormously cut throughput.

approx. same time that slow-start was presented in IETF meeting ...
there was also acm sigcomm meeting with a couple papers of interest
... one showed how slow-start was non-stable in large, heterogeneous,
real-world bursty network. I've periodically pointed out that rate-based
pacing requires at least some rudimentary system timing facilities ... and in
this time period, tcp/ip stacks were being deployed on low-level
platforms with insufficient timer support ... somewhat accounting for
being forced to fall back to slow-start.

there have been some recent papers claiming that a rate-based tcp
running over 56kbit/sec dial-up link can have higher end-to-end
throughput than standard slow-start tcp running over 1.5mbit/sec (given
various congestion scenarios) ... in any case, the legacy justification
for slow-start is long gone.

another interesting paper from the same acm sigcomm meeting was about
ethernet throughput. it showed a typical 30 station ethernet lan with
all stations having a low-level device driver app constantly
transmitting minimum sized ethernet packets ... and the effective
throughput dropping off to 8mbits/sec. (which is higher effective
throughput than 16mbit/sec token-ring)

this was in the period that the 16mbit token-ring people were publishing
lots of FUD, comparing to some ridiculously low ethernet throughput.
One of my conjectures for the way they came up with the numbers was that
they used the very early ethernet prototype that ran at 3mbits/sec (not
10mbits/sec) and didn't support listen-before-transmit (10mbit ethernet
with csma/cd standard had significantly better throughput than the earliest
ethernet prototype).

the new almaden ibm research bldg had been extensively wired with CAT5
anticipating 16mbit/sec token-ring ... but they found running 10mbit/sec
ethernet on CAT5 had both higher effective aggregate LAN throughput as
well as lower message latency.

--
virtualization experience starting Jan1968, online at home since Mar1970

SNA vs TCP/IP

lynn@GARLIC.COM (Anne & Lynn Wheeler) writes:
a big issue with tcp throughput is slow-start as the mechanism for
congestion control/avoidance ... aka in an enormously large heterogeneous
network with dozens of hops end-to-end and bursty traffic ... there is
relatively high probability of periodic congestion. dropping a packet
and then restarting slow-start ... can enormously cut throughput.

aka ... it doesn't distinguish between a packet missing because of a
transmission error and a packet dropped because of congestion.

it is possible to show that dynamic adaptive rate-based pacing can be
stable and maintain higher throughput rates than slow-start in the face
of packet drops.

for some topic drift ... in hsdt there were some fiber-links with 10**-9
bit-error-rate with 15/16s reed-solomon FEC ... which resulted in
effective 10**-15 bit error rate ... approx. the same as ibm mainframe
channels of the period.
http://www.garlic.com/~lynn/subnetwork.html#hsdt

there was also a gimmick for selective resend (on packet drop) ... where
instead of retransmission of the same packet .... would transmit the 1/2
rate Viterbi FEC ... the receiving end could have both the original
packet in error and the Viterbi FEC packet in error (despite 15/16
reed-solomon FEC) ... and still be able to reconstitute the original
data. if the error rate continued at a high enough level ... transition
to sending the 1/2 rate Viterbi as part of the original transmission
... cutting the effective rate in half ... but during periods of
extremely hostile transmission ... packets still get through.

--
virtualization experience starting Jan1968, online at home since Mar1970

SNA vs TCP/IP

lynn@GARLIC.COM (Anne & Lynn Wheeler) writes:
for some topic drift ... in hsdt there were some fiber-links with 10**-9
bit-error-rate with 15/16s reed-solomon FEC ... which resulted in
effective 10**-15 bit error rate ... approx. the same as ibm mainframe
channels of the period.

aka this was just under 30yrs ago ... hsdt was fortunate to have an
engineer that had been one of reed's graduate students and had done a
lot of the work on reed-solomon ... hsdt was also working with
cyclotomics up in berkeley, which had done a lot of reed-solomon error
correcting work, including for the cdrom standard (i would joke that i
could get much better technology in a $300 cdrom player than i could get
in a $10,000 computer high-speed modem)

past cyclotomics talk at stanford "The Impact of Error-control on
Systems Design"

even now, for a gigabit link & 10**-15 bit error rate ... a packet would
be dropped for an uncorrected bit error only every couple weeks ...
much more likely to have dropped packets because of congestion at
intermediate nodes.

SNA vs TCP/IP

lynn@GARLIC.COM (Anne & Lynn Wheeler) writes:
for some topic drift ... in hsdt there were some fiber-links with 10**-9
bit-error-rate with 15/16s reed-solomon FEC ... which resulted in
effective 10**-15 bit error rate ... approx. the same as ibm mainframe
channels of the period.

from above:
Reed-Solomon might well be the most ubiquitously implemented algorithm:
Barcodes use it; every CD, DVD, RAID6, and digital tape device uses it;
so do digital TV and DSL. Even in deep space, Reed-Solomon toils
away. Here's how it works its magic

from above:
As Senator Elizabeth Warren pointed out, in a letter to the White House:

I have heard the argument that transparency would undermine the
administration's policy to complete the trade agreement because public
opposition would be significant. If transparency would lead to
widespread public opposition to a trade agreement, then that trade
agreement should not be the policy of the United States. I believe in
transparency and democracy and I think the US Trade Representative
should too.

hancock4 writes:
If someone was originally granted IP for say 50 years and then
suddenly that patent is ordered cutback to ten years and thus expired,
that does seem to be confiscation of private property without
compensation, and yes, communism.

the claim was that the patent system was originally created to encourage
individual innovation and protect the individual from large institutions
focused on preserving status quo (with individual innovation being major
factor in increasing the standard of living).

however for some time now, large institutions have inverted the original
purpose of the patent system as way of preserving the status
quo. patents that exist for more than a decade or two would stray into
the area of preserving status quo and would run counter to the original
purpose of the patent system (encouraging constant individual
innovation).

Anne & Lynn Wheeler <lynn@garlic.com> writes:
the claim was that the patent system was originally created to encourage
individual innovation and protect the individual from large institutions
focused on preserving status quo (with individual innovation being major
factor in increasing the standard of living).

however for some time now, large institutions have inverted the original
purpose of the patent system as way of preserving the status
quo. patents that exist for more than a decade or two would stray into
the area of preserving status quo and would run counter to the original
purpose of the patent system (encouraging constant individual
innovation).

from above:
Patents are teachings, true recipes for enterprise. By law they are
fully open documents that exist for the purpose of enabling innovation

...
One is that lawyers have learned to hide the ball inside intentionally
opaque patents. The standards of patent-granting agencies tend to range
from mediocre to execrable, and from incomprehensible to inconsistent.
The whole process is painfully contentious, litigious, expensive and
fraught.

... snip ...

enormous piles of money are spread around Capitol Hill by special
interests for the purpose of preserving the status quo. the recent case
of the "open trade treaty" (TPP) ... is similar ... and has
been far from "open"
http://www.garlic.com/~lynn/2013n.html#35 'Free Unix!': The world-changing proclamation made 30 years ago today

--
virtualization experience starting Jan1968, online at home since Mar1970

As per above ... mainframe channels and devices are just industry
standard technology with simulation layers for mainframe legacy
channel and disk operation (which will be slower than systems using
the native hardware directly w/o the legacy emulation layer).

IBM FICON is a protocol layer on industry standard fibre-channel
standard. In 1988, I had been asked to help LLNL standardize some
serial stuff they had ... which quickly morphed into FCS
(fibre-channel standard). Later some POK channel people got involved
and defined a really heavy weight protocol layer on top of FCS (FICON)
that drastically cuts the native FCS throughput.

Recent *peak* z196 throughput benchmark got 2M IOPS using 104 FICON
and 14 SAPs. SAP documentation says that 14 SAPs are capable of 2.2M
SSCH/sec all running 100% busy ... but the recommendation is to keep
SAP busy at 70% or less (1.5M SSCH/sec).

By comparison a recent FCS announced for e5-2600 blade claims over 1M
IOPS i.e. two such FCS would have higher native throughput than the
peak z196 benchmark got using 104 FICON (which is a protocol layer on
top of 104 FCS that drastically cuts the native FCS throughput).

Outboard of FCS, all the disk subsystems are effectively identical
... there hasn't been any real CKD DASD manufactured for decades
... just using industry standard disks with a protocol layer that
simulates CKD operation.

Max configured z196 is rated at 50BIPS and goes for $28M. IBM
financials claim that IBM mainframe group earns total $6.25 for every
processor dollar ... or a customer with $28M max configured z196 is
paying IBM on the avg. $175M.

By comparison, e5-2600 blades have processor rating of between
400BIPS-600BIPS (ten times that of max. configured z196) and IBM has
base list price of $1815.

the common cloud instances tend to have nearly twice the integer
processing power (BIPS) of their floating point processing power
(GFLOPS). z196 at 50BIPS and $175M works out to $3.5M/BIPS while
e5-2600 at 500BIPS and $1815 works out to around $3.5/BIPS (factor of
million times difference). Also cloud operators claim they build their
own servers for 1/3rd the price of brand name vendors (close to
$1/BIPS) ... and server chip manufacturers claim now shipping more
server chips directly to cloud operators than to brand name server
vendors. Large number of the cloud mega-datacenters around the world
are claiming well over million cores per mega-datacenter.

and latest news for the next linux kernel ... including an optimized
driver for native SSD (actually matching the driver operation to the
way the device actually operates, not possible in the CKD emulation
paradigm) with reported improvements of 3.5 to 10 times greater IOPS
and 10 to 38x reduction in latency
http://www.phoronix.com/scan.php?page=news_item&px=MTUxNTk

from above:
He became convinced exchanges were providing such an edge after he
says he was offered one himself when he ran a high-speed trading firm
-- a way to place orders that can be filled ahead of others placed
earlier. The key: a kind of order called "Hide Not Slide".

from above:
After Thursday's leak of the intellectual property chapter it is obvious
why the USTR and the Obama administration have insisted on secrecy. From
this text it appears that the U.S. administration is negotiating for
intellectual property provisions that it knows it could not achieve
through an open democratic process. For example, it includes provisions
similar to those of the failed Stop Online Piracy Act (SOPA), and
Protect Intellectual Property Act (PIPA), and the Anti-Counterfeiting
Trade Agreement (ACTA) that the European Parliament ultimately
rejected. The United States appears to be using the non-transparent
Trans-Pacific Partnership negotiations as a deliberate end run around
Congress on intellectual property, to achieve a presumably unpopular set
of policy goals

... snip ...

--
virtualization experience starting Jan1968, online at home since Mar1970

that was the argument that the POK favorite son operating system people
used when the attempt was made to add compare-and-swap to 370.

charlie had invented compare-and-swap while doing fine grain
multiprocessing locking for cp67 at the cambridge science center.
attempts to add it to 370 were met by opposition from the POK favorite
son operating system people (having used a single TS spin-lock for
entering the os/360 360/65mp kernel). The owners of the 370
architecture then said that in order to justify compare-and-swap for
370, non-multiprocessor use cases would be needed.

thus were born the non-multiprocessor specific use cases that still
appear in the principle of operations. The TS use case was
multiprocessor serialization: set a lock, perform some operation
(critical section), then clear the lock. compare-and-swap is a single
serialized, non-interruptable, atomic operation. The compare-and-swap
use cases include non-kernel, interruptable, multi-threaded (not
necessarily multiprocessor) operation performing atomic serialized
operations w/o the overhead of making a kernel call to perform the
serialized operation.
misc. past posts mentioning multiprocessing and/or compare-and-swap
http://www.garlic.com/~lynn/subtopic.html#smp

compare-and-swap was fairly quickly adopted by large multi-threaded
applications like high throughput DBMS systems. compare-and-swap was so
useful, numerous other hardware platforms adopted it also (or
instructions with similar atomic semantics).

ibm's 801/risc architecture was originally developed in the period
around Future System (failed new machine architecture that was going to
completely replace 360/370) ... and i've frequently claimed that there
were a lot of 801 objectives set to the extreme opposite of the FS
complexity. One of the decisions was not to have cache consistency
... to avoid the enormous performance penalty paid by the FS
multiprocessor (and even the extreme throughput penalty paid for 370
strong memory model multiprocessor cache consistency). misc. past
posts mentioning future
system
http://www.garlic.com/~lynn/submain.html#futuresys

No cache-consistency pretty much ruled out multiprocessor operation
... and the philosophy that all instructions had to complete in a
single cycle ruled out a compare-and-swap instruction.

however, the lack of compare-and-swap instruction ... put the RS/6000 at
severe throughput disadvantage with open system RDBMS benchmarks
compared to other platforms (open system RDBMS had a fall-back to
kernel call locking for the few hardware platforms that didn't have
compare-and-swap semantics).

fairly early, compare-and-swap instruction emulation was added to the
rs/6000 AIX system call FLIH ... within a couple instructions of entry
to system call FLIH ... there was special case for compare-and-swap
emulation that then immediately returned to the application. While it
wasn't useful for real multiprocessor operation, it did achieve the
objective of not being interruptable while emulation processing was in
progress.

from above:
In summary, what can we conclude from these data? Canada, with by far
the most sole-country proposals, seems like it is up to something.
Perhaps more important, the United States and Japan are relatively
isolated in their negotiating positions. This could bode poorly for the
United States as it seeks to shape the TPP to its liking.

hancock4 writes:
If a corporation violated the law, then those humans responsible for
the violation ought to be sent to jail or fined. The corporation
could be fined.

But these punishments should be decided by a criminal court in a legal
environment properly able to assess the situation. Civil court under
the present system doesn't work that way.

For instance, many tort claims are handled out of court. If a
corporation was criminally negligent, it beats the rap. Other times
the corporation is fined for IMHO emotional issues rather than true
legal issues. For instance, smoking was publicly known to be
dangerous by the mid 1960s. Long after that people have sued and won
big awards against the tobacco companies; I think that is wrong
because the smokers chose to retain bad behavior long after it was
known to be a bad idea. I've seen old tobacco ads and I don't buy the
argument they were illegal in their day.

then there is the too big to fail ... that have been caught money
laundering for drug cartels and terrorists ... for other institutions,
the prescribed treatment is shutdown and the executives go to jail
... however for the too big to fail ... they get their hand slapped
and asked please to not do it anymore ... largely contributing to
references to too big to prosecute and too big to jail (overlap with
moral hazard where they feel empowered to do anything they want since
there is little likelihood of any serious consequences).
http://www.garlic.com/~lynn/submisc.html#too-big-to-fail
posts mentioning money laundering
http://www.garlic.com/~lynn/submisc.html#money.laundering

and the spin-doctors are hard at work. the JPM WaMu fine of $13B is all
over the business news about how unfair it is. however, Daily Show had
clip from 2008 where Jamie Dimon was saying that even with the $29B they
were putting aside for WaMu penalties and fines ... the WaMu deal was
still a gold mine ... so it was viewed as extremely favorable deal
... up to and including $29B fine. Anything less is additional cash in
their pocket ... so employing spin-doctors trying to help keep it well
under $29B is significant ROI (except for all the suffering of the
victims).

--
virtualization experience starting Jan1968, online at home since Mar1970

Shmuel (Seymour J.) Metz <spamtrap@library.lspace.org.invalid> writes:
First on a modified 360/40 with CP-40 in the 1960's. PR/SM came out
much later, and used a modified CP to implement logical partitions for
those who didn't need the full functionality of VM.

pr/sm was originally done for 3090 in reaction to amdahl's hypervisor
... built by programming in "macrocode mode" ... pr/sm was significantly
more difficult since it had to be done in native 3090 horizontal
microcode.

slac hosted the monthly BAYBUNCH user group meetings ... and I gave a
presentation on what went into ECPS (originally for 138/148) ... past
ref
http://www.garlic.com/~lynn/94.html#21 370 ECPS VM microcode assist

offline the Amdahl people asked for more information ... and related
what they were in the process of doing ... and some of the results along
the way.

the 138/148 was vertical microcode ... making it much more similar to
native 370 (than 3090 horizontal microcode, and by comparison making
ECPS implementation simpler).

with the failure of future system, there was mad rush to get stuff back
into the 370 product pipelines. Part of that, POK managed to convince
corporate to kill off the (virtual machine) vm370 product, shutdown the
burlington mall development group and transfer all the people to POK (or
otherwise MVS/XA wouldn't meet its ship schedule 7-8 yrs later).
Endicott managed to save the vm370 product mission ... but had to
reconstitute a development group from scratch.

part of the POK activity (for the vm370 developers) was the vmtool
... a virtual machine internal-only development platform ... never
intended for product release. 3081 SIE was done in support of the VMTOOL
... again never intended for release to customers. 3081 SIE design point
was dispatching long-running (MVS) guests ... and so was infrequently
executed ... one of the issues was that 3081 microcode space was limited
... and invocation of SIE also required "paging" in pieces of the
microcode for execution.

VMTOOL was eventually released to customers as VM/SF (supposedly
targeted for customer migration aid from MVS to MVS/XA) ... but as
expected it had significant performance issues ... when executed
frequently in an interactive CMS oriented environment. Internally there
was also a version of vm370 that was done supporting XA-mode ... that
was far superior to the VMTOOL made available to customers. This
resulted in internal politics between POK (vmtool) and Endicott (vm370)
... which POK won ... suppressing the internal vm370 XA support.
http://www.garlic.com/~lynn/2007u.html#9 Open z architecture and Linux questions
http://www.garlic.com/~lynn/2012g.html#19 Co-existance of z/OS and z/VM on same DASD farm
http://www.garlic.com/~lynn/2012o.html#35 Regarding Time Sharing

Seebs <usenet-nospam@seebs.net> writes:
Well, yeah. Health coverage hasn't been even a little like "insurance" in
the sense of fire or flood insurance in decades.

there are periodic references to over half of national flood insurance
going to the same people year after year ... even after congress
passed a law in the eighties that flood insurance wouldn't be
available to people who rebuild on the same flood plain. the counter
by some congressmen (from that state) was that the state deserved the
funds as a form of federal economic assistance. this is similar to
congressional comments that mass. deserved the enormous excess funding
for the "big dig" and the possibly corrupt 90% skimmed off
(effectively another form of subsidies to the 1%)

as mentioned upthread, I was on the XTP technical advisory board; XTP
would do reliable transport in a minimum of 3 packets (compared to a
minimum of 7 packets for TCP). I also wrote the specification for XTP
dynamic rate-based transmission for congestion management ... and
several times more recently I've written about doing HTTPS/TLS
piggy-backed on the XTP reliable 3-packet exchange.

latest from today: Google trumpets Chrome's SPDY gains; although
Google online services have adopted the protocol, 8 of the top 10
U.S. Web properties have not
http://www.networkworld.com/news/2013/121613-judge-pulls-no-punches-in-276982.html

scott@slp53.sl.home (Scott Lurndal) writes:
Burroughs & IBM. Burroughs could build 4x4 systems (four
loosely-coupled, four-tightly-coupled (shared memory) processor systems)
in the medium systems range, and IIRC some of the early A series (A15?) could
do four or eight processors.

In the 90's unisys built the OPUS (128 Pentium Pros) massively parallel
system (shipped in 95) using the Intel Paragon supercomputer backplane.
Meanwhile, Sequent was inventing ccNUMA.

LLNL had some serial technology and in 1988 I was asked if I could help
them standardize ... which eventually morphs into fibre channel standard (FCS)
IBM later does a heavy weight layer that they called FICON (simulating
old mainframe channel) that is significant lower throughput than native
FCS throughput
https://en.wikipedia.org/wiki/Fibre_Channel

sci includes a cache coherence protocol that is used by several vendors
for smp scaleup (in some cases using the same sci chips from the same
sci chip vendor). convex does 128-way HP snake chips (two chip board
shared cache with sci 64-way cache coherence), sequent and data general
does 256-way intel chip (four chip board shared cache with sci 64-way
cache coherence). other vendors also do SCI smp scaleup. I know some
number of vendors (including unisys) resold rebranded sequents under
their own logos.

and then leaves and does his own supercomputer system (that included
heavy funding from ibm). after that implodes ... and he then is
CTO at sequent. We do some consulting for Steve (before sequent is
bought by IBM). Other trivia, sequent people claimed to have done most
of the windows/NT smp scaleup ... for up to 32-way (as alternative to
their unix dynix system).

we then get asked into having several meetings with the guy running
superdome (he had been at cray, then did stint at IBM before going to
hp). he claims that 32-way had become much more the technology "sweet
spot" (than sci)
https://en.wikipedia.org/wiki/HP_Superdome

news organizations (especially in the dc area) will periodically
refer to congress as kabuki theater (what you see is pure facade)
... check 1603-1629 period and synonym for kabuki
https://en.wikipedia.org/wiki/Kabuki

from above:
To recap, the Obama Administration is negotiating behind the scenes --
in a "trade" deal -- that if you buy a cellphone you should neither be
able to use it with other carriers nor use software on it not approved
by the carrier. And if you do either of those things, you could face
fines and jail time.

... snip ...

--
virtualization experience starting Jan1968, online at home since Mar1970

Bridgestone Sues IBM For $600 Million Over Allegedly 'Defective' System That Plunged The Company Into 'Chaos'

from above:
It said the computer system built so far was unreliable and full of
bugs (had "a higher number of software defects than industry norms.")

Plus, it blamed IBM's revolving-door workforce. The initial project
manager and the top executive left in 2009 and IBM proceeded to have
638 people work on the system, rotating most of them off in less than
a year.

I guess I was constantly bucking the system ... I was constantly being
told I had no career in the company and could expect no
promotions. For example, during the FS period ... which was going to
completely replace 360/370, I continued to work on 370 and would
periodically ridicule the FS activities ... which wasn't exactly
career enhancing ... then FS imploded (potential to have taken down
the company with it). Something similar was repeated several times. My
exit executive interview included the comment that they could have
forgiven me for being wrong ... but they were never going to forgive
me for being right.

and also references that there are lots of rewards if you go along
... but the system can be ruthless if you attempt to buck it.

note TIME periodically brings the article back out of behind the
paywall ... also linkedin swizzles the actual URL that you click on
(i.e. what you see in the post is not the actual URL linkedin sends to
the web) ... linkedin modified URL can confuse the wayback machine
... so it may be necessary to copy/paste rather than actually click

... another line they would periodically use was that the best i could
hope for was to not be fired and to be allowed to do "it" again
(whatever the new thing of the moment was).

and from the annals of truth is stranger than fiction ... in 1992 i
was paid a lump sum to leave ... the transition was technically an
extended unpaid leave of absence ... but i was not allowed to come
back. after my last day, i arrive home to find a letter that says that
I've been promoted effective the first day of my leave of absence

Ferguson & Morris in "Computer Wars: The Post-IBM World" discuss the
effects of FS failure
... and perhaps most damaging, the old culture under Watson Snr and Jr
of free and vigorous debate was replaced with sycophancy
and make no waves under Opel and Akers. It's claimed that
thereafter, IBM lived in the shadow of defeat

... and:
But because of the heavy investment of face by the top management, F/S
took years to kill, although its wrongheadedness was obvious from the
very outset. "For the first time, during F/S, outspoken criticism
became politically dangerous," recalls a former top executive.

... snip ...

I was blamed for online computer conferencing on the internal network
in the late 70s and early 80s (the internal network was larger than
the arpanet/internet from just about the beginning until sometime late
'85 or early '86) ... as well as tandem memos ... from IBM Jargon:

Tandem Memos - n. Something constructive but hard to control; a fresh
of breath air (sic). That's another Tandem Memos. A phrase to worry
middle management. It refers to the computer-based conference (widely
distributed in 1981) in which many technical personnel expressed
dissatisfaction with the tools available to them at that time, and
also constructively criticised the way products were
developed. The memos are required reading for anyone with a serious
interest in quality products. If you have not seen the memos, try
reading the November 1981 Datamation summary.

...

folklore is that when the executive committee was informed of online
computer conferencing (and the internal network), 5of6 wanted to
immediately fire me. Cooler heads prevailed ... possibly because there
was claim that something like 27,000 employees following tandem
memos.

A lot of what I was able to do was in spite of a lot of top management
... I was allowed to wander around a lot of the company working on
problems ... but awareness of it rarely went above the first or 2nd
line management. When upper management became aware that I had been
providing customized/enhanced operating system for (world-wide
sales&marketing support) HONE system for 15yrs ... they wanted to put
a stop to it (since I was only a single person and there was no
official executive agreements authorizing/permitting it).

'Free Unix!': The world-changing proclamation made 30 years ago today

Shmuel (Seymour J.) Metz <spamtrap@library.lspace.org.invalid> writes:
The Devil is in the details. You need to do things in a fashion that
punishes the corporation without injuring their creditors,
employees[1] or suppliers. I'd allow collecting pay and benefits from
their accounts, paying off loans and paying suppliers, but not
ordering new supplies or allowing new work, although I would continue
paying the employees for the work that they would otherwise have been
doing.

a lot of the executives of the too-big-to-fail were leveraging their
corporations to do things for their personal gain ... and their personal
gain was so enormous that it over shadowed any concern they might have
had about what it might do to their institutions, the economy and/or the
country (i.e. collateral damage). a lot of the attention on institution
is misdirection and obfuscation away from the individuals.

stories were that when JPM took on WaMu they set aside $29B for legal
liabilities ... but still felt that it was a gold mine. The current
whining about $13B is facade ... since they still get to pocket the
rest of the $29B. Also if spinning public news might cut it by another
billion or two ... that would be enormous ROI for the news spinning
effort.

there has been some amount about computer game theory playing a big part
in wallstreet players actions over the past couple decades ... looking
at fraud as just another kind of investment ... what is the worst case
downside ... and how to mitigate that risk ... versus the personal
upside.

rebuild 1403 printer chain

Jon Elson <jmelson@wustl.edu> writes:
Back in the 360 days, the controllers needed to be pretty simple, and
the design of the devices was made such that the controllers didn't
get too complex.

one of the 360 channel trade-offs was that memory was very expensive and
scarce ... so everything relied on processor memory ... with the i/o
infrastructure constantly referencing processor memory.

this really shows up in dasd ckd operations ... rather than having
filesystem structure cached in memory ... it was outboard on disk
... and the system made use of search operations to try and find
things ... furthermore the search argument was in processor memory and
had to be refetched for every compare operation ... for instance, when
searching on key-equal, every time the disk encountered a key on disk,
the search key argument was refetched from processor memory for the
compare. this was a technology trade-off that conserved
electronic storage at the expense of enormous amounts of i/o and
bandwidth resources.

at least by the mid-70s, this trade-off was inverting ... with
electronic memory becoming significantly more plentiful and i/o and
bandwidth becoming the bottlenecked resources.

in late 70s, ibm introduced fixed-block-architecture (FBA) for entry and
mid-range ... but continued to support CKD dasd because of MVS operating
system inability to move off the paradigm. by the late 80s, nobody was
making ckd dasd anymore ... and it was necessary to have hardware
simulation for the CKD dasd operations on industry standard fixed-block
disks (something that continues now to this day).
http://www.garlic.com/~lynn/submain.html#dasd

I had offered to provide MVS FBA support ... but was told that I needed
a $26M business case to cover training and new documentation ... and I
couldn't use total life time cost savings ... only incremental new disk
sales ... basically couple hundred million in new disks sold that
otherwise wouldn't be sold w/o FBA support. Then I was told that
customers were buying disks as fast as they could be made ... so it
wouldn't be possible to show any incremental disk sales.

in 1980, the ibm santa teresa lab was bursting at the seams and they
decided to move 300 people from IMS group to offsite bldg. They had been
offered "remote 3270" terminal support back into the STL datacenter
... but found the human factors of remote 3270 totally unacceptable. I
was con'ed into doing channel extender support for the group
... basically a channel emulation box was placed at the remote site
... to which "local" channel attached 3270 controllers were connected.
Channel extender support involved highly efficient full-duplex network
operations that downloaded channel programs to the remote box ... and
the emulated channel protocol chatter only ran between the channel
emulation box and the controller ... significantly offloading a lot of
activity off the real local channel. This had the effect of people at
the remote site not seeing any difference between real local 3270
operation in STL and simulated local 3270 operation at the remote
bldg. It also had the side effect of improving the throughput of the
affected systems in the STL datacenter ... since it improved their real
channel operation and reduced total real channel busy for the 3270
operations. some HSDT posts including discussion of getting
channel speed throughput over what would be considered network links
http://www.garlic.com/~lynn/subnetwork.html#hsdt

for instance the original mainframe tcp/ip product was done in vs/pascal
but had some performance issues, getting 44kbytes/sec while using
nearly a whole 3090 processor. I did the changes for rfc1044 and in
some tuning tests
at cray research got channel speed throughput between cray and 4341
using only modest amount of 4341 cpu (possibly 500 times improvement
in bytes moved per instruction executed)
http://www.garlic.com/~lynn/subnetwork.html#1044

part of fibre channel standard is download of i/o programs to the remote
end ... significantly cutting chatter and end-to-end protocol latency
over the link.

some POK channel engineers then become involved in FCS and define a
heavy-duty protocol layer on top of FCS that retains a lot of
end-to-end protocol chatter and latency that drastically cuts
the throughput compared to native FCS throughput .... which eventually
morphs into something called FICON
http://www.garlic.com/~lynn/submisc.html#ficon

recent z196 peak i/o benchmark used 104 FICON (running on 104 FCS) to
achieve 2M IOPS. by comparison, there was recent FCS announced for
e5-2600 claiming over a million IOPS (for a single native FCS, aka two
such native FCS would have higher throughput than the z196 peak i/o
benchmark getting 2M IOPS)

--
virtualization experience starting Jan1968, online at home since Mar1970

from above:
The main reason is rooted in the Pentagon's continuing reliance on a
tangle of thousands of disparate, obsolete, largely incompatible
accounting and business-management systems. Many of these systems were
built in the 1970s and use outmoded computer languages such as COBOL
on old mainframes. They use antiquated file systems that make it
difficult or impossible to search for data. Much of their data is
corrupted and erroneous.

rebuild 1403 printer chain

"Joe Morris" <j.c.morris@verizon.net> writes:
Without folding, if you attempted to print a mixed-case data stream your
print speed went down the toilet. Any print line containing one or more
characters not mapped to at least one of the 240 glyphs on a print train
would not be considered complete until the print train had passed its home
position twice (thus guaranteeing that every print position on the printer
had been exposed to every glyph on the train). Additionally, unless
suppressed this situation resulted in a unit check with "data check" sense,
causing the operating system to go through error recovery.

for more trivia & topic drift ... because unit check status was in the
channel ... the channel went into contingent connection (no other
operations could be performed) until sense information had been
retrieved.

unit check also had to be associated with specific operation ... I got
sucked into resolving a problem that the 3880 disk control unit people
were causing. 3880 used a much slower processor for control operations
than previous 3830 ... but there was requirement that 3880 had to be
within 5-10% performance of the 3830. One of the problems was that the
3880 processor was really, really slow cleaning everything up at the end
of i/o program ... and so to make it look faster ... they tried
presenting ending status interrupt early ... before it finished
everything (hoping that it could finish before operating system latency
got around to trying next operation).

when I first wandered around the disk engineering & development bldgs
... they had lots of 370s in their machine rooms running single device,
stand-alone testing ... machines pre-scheduled 7x24 around the clock.
at one point they had tried running MVS for concurrent testing ... but
found it had 15min MTBF in that environment (requiring manual reboot).
I offered to rewrite the I/O supervisor making it bullet proof and never
fail ... so they can do on-demand, anytime, concurrent testing ... which
significantly improved productivity.

since even concurrent testing of all available testcells used only a
few percent of the machine ... they also setup a 3033 to provide
online interactive service ... using 16 surplus 3330 disks and a 3830
controller. one monday morning I got a call that their online
interactive performance had gone into the can and they wanted to know
what I had done over the weekend. Analysis eventually showed that they
had replaced the 3830 with an engineering 3880 ... and the controller
overhead (throughput latency problem) was much worse than anybody had
modeled.

There was also a problem with my rewrite of the I/O supervisor ... part of
the rewrite was to make the device redrive pathlength as short as possible.
370/xa SSCH was being justified because operating systems left a lot of
device idle time between the end of an operation and the redrive of the
next pending queued operation. I wanted to show nearly all of the SSCH
justification was based on really terrible operating system software
and that efficient implementation could come very close to architectured
SSCH (which would allow a separate dedicated processor for redrive).

it turns out the redrive code was hitting the 3880 while it was still
busy cleaning up the previous operation ... and the controller busy
condition required presenting SM+BUSY (operation not started). Then
later, since it had presented SM+BUSY ... it had to present a separate
CUE interrupt. Besides significantly increasing channel busy, with
operations now taking much longer (than with the 3830) ... the operating
system had to try starting each operation twice and handle twice the
number of interrupts. Fortunately this was six months before 3880 ship
to customers and there was some additional problem masking they could
do (but they couldn't totally eliminate the problem).

it also turns out the 3880 presenting ending status early precipitated a
separate problem. There are certain kinds of errors that can be
identified during cleanup ... that aren't directly associated with the
previous data transfer. The 3880 developers decided that they would
present an unsolicited unit check interrupt when this happened. I
pointed out that this is a violation of channel architecture ... every
unit check has to be associated with an operation. This eventually
escalated into conference calls with POK channel engineers, with me
sitting with the disk engineers mediating. It was eventually resolved
... but afterwards they wanted me to sit in on all conference calls with
POK channel engineers.

The resolution to the unsolicited unit check was to save it until the
next time an operation was started on that device ... and present cc=1,
immediate status, operation not started, channel status stored ... with
the unit check set then. There was some flaky stuff that they had to
play with ... since things are sort of in contingent connection (not
allowing things to start) ... but the unit check hadn't actually been
presented ... so you don't know which device address you have to do a
sense operation for (and you can't present cc=1, unit check for a device
or controller on the same channel unrelated to the unit check).
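The deferred-presentation scheme above can be sketched as a toy model (purely illustrative ... the class, method names, and two-tuple status values are my own invention, not actual 3880 microcode or channel commands):

```python
class Controller:
    """Toy model of deferred unit-check presentation: an error found
    during post-operation cleanup is latched per device address and
    presented only when the NEXT operation is started on that device
    (as cc=1, immediate status stored, operation not started, unit
    check set) -- never as an unsolicited interrupt, which would
    violate the rule that every unit check be tied to an operation."""

    def __init__(self):
        self.latched = set()  # device addresses with a pending unit check

    def cleanup_error(self, dev):
        # error identified while cleaning up the previous operation
        self.latched.add(dev)

    def start_io(self, dev):
        if dev in self.latched:
            self.latched.discard(dev)
            # host sees cc=1 with unit check, then issues a SENSE to
            # this device (so it knows which address to sense)
            return ("cc=1", "unit check")
        return ("cc=0", "started")
```

The key property is that the unit check is always delivered against a known device address, so the host knows where to direct the sense operation.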

as i've mentioned before, while they eventually managed to mask much of
the 3880 problems to minimize additional operating system overhead
... they couldn't mask the significant increase in channel busy.

this shows up in the 3090 design ... it was configured with a number of
channels assuming 3830 channel busy characteristics. when they found out
how bad the 3880 channel busy overhead was ... it required them to
significantly increase the number of channels (in order to reach IOPS
targets/throughput). This required an additional TCM and increased 3090
manufacturing costs. There were semi-humorous references to the POK 3090
group billing the 3880 group for the increase in 3090 manufacturing
costs.

Marketing respun the significant increase in the number of 3090 channels
as indication of the enormous i/o capacity of the machine (as opposed to
compensating for the enormous increase in 3880 channel busy
overhead). this marketing spin continues to this day with FICON ... as
mentioned in previous posts
http://www.garlic.com/~lynn/submisc.html#ficon

--
virtualization experience starting Jan1968, online at home since Mar1970

note that the engineers didn't want the switch from the (fast) horizontal
microcode engine in the 3830 to the (much slower) vertical microcode
(JIB-prime) engine in the 3880 ... they attribute the decision to a new
(non-engineer) promoted to head of the division (from outside the
organization) wanting to save manufacturing costs on the 3880.

this was discussed rather bitterly in "tandem memos" as well as the rise
of MBAs destroying US corporate culture.

I had written up and distributed some amount of the 3880 troubles
... which apparently corporate hdqtrs objected to. They sent out the
corporate hdqtrs executive responsible for employee satisfaction to
talk to me. He was somewhat set up ... briefed that I was from research
and apparently obviously just spouting off about stuff I knew nothing
about. I was set up, being told that they wanted to hear about how to
improve the company ... but he had a copy of what I had written with
all the objectionable parts highlighted and a script to browbeat me
on each point. Their scenario dissolved since I had been deeply
involved in the whole thing as well as helping diagnose and mitigate
the problems.

Peter Flass <Peter_Flass@Yahoo.com> writes:
Once you get past one you've got all the problems. Naturally there
are a lot of things you can do on a two(ish) processor system that
won't scale up to 128 processors, but it's all just details - unless
you're windows (95 maybe?) and just decide to put locks around huge
chunks of the GUI code because you can't be bothered making it
reentrant.

however, his insistence on the lack of 801 cache consistency was also
possibly a reaction to the high overhead of cache consistency for 370
multiprocessors (it wasn't until somerset and aim ... where some of the
801 processor stuff was mixed with motorola's 88k multiprocessor and
cache consistency ... for power/pc).

370 2-way multiprocessors slowed the processor machine cycle by 10% just
to support cross-cache invalidation ... aka a base 2-way was 1.8 times
the processing of a single processor (aka 2*.9) ... with processing
further slowed by the actual handling of cross-cache invalidates (i.e.
the 10% cycle slowdown was just for being able to accept the signals).
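The 2-way throughput arithmetic above, worked through (a minimal check of the numbers as stated in the text):

```python
# 10% of the machine cycle given up just to be able to accept
# cross-cache invalidation signals: each processor runs at 0.9 of a
# uniprocessor, so a base 2-way delivers 1.8 times a single processor
# -- before the further slowdown of actually handling the invalidates.
single = 1.0
per_processor = single * (1 - 0.10)   # 0.9 of a uniprocessor
base_two_way = 2 * per_processor      # 1.8x, the best case
```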

3081 (2-way) was originally going to be a multiprocessor only "370"
machine ... but because TPF (airline control program) didn't have
multiprocessor support ... they eventually came out with 3083 (a 3081
with one processor removed) because there was the risk of all the TPF
customers moving to non-IBM 370 clone vendors (that had faster single
processor 370s). 3083 single processor was clocked faster than 3081
processor since the cross-cache slowdown was removed.

going to the 3084 (two 3081s in a 4-way) was a major effort ... since the
slow-down had to account for invalidation signals coming to every cache
from three other processors (rather than from a single other processor).

there was also a lot of operating system SMP sensitivity work. there had
already been a lot of fine-grain locking throughput work ... however
there was a scenario where different storage structures overlapped in the
same cache line. when multiple different processors were concurrently
using different structures that overlapped in the same cache line
... there could be an enormous amount of cross-cache invalidation and
cache thrashing going on. in the 3084 time-frame there was a lot of
effort to redo kernel data structures so they were always aligned on a
cache-line boundary and a multiple of the cache-line size. The claim was
that just this change to kernel data structures resulted in over five
percent throughput improvement.
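A sketch of the data-structure rule described above (the 64-byte cache-line size is an assumed illustrative value; the actual 3084-era line size differed, and the function names are my own):

```python
CACHE_LINE = 64  # assumed illustrative line size in bytes

def pad_to_line(size, line=CACHE_LINE):
    """Round a structure size up to a multiple of the cache-line size;
    combined with placing each structure on a line boundary, this
    guarantees two structures never overlap in the same cache line
    (so processors working on different structures never thrash each
    other's caches with cross-cache invalidates)."""
    return -(-size // line) * line    # ceiling division, then scale

def share_a_line(start_a, size_a, start_b, size_b, line=CACHE_LINE):
    """True if two structures (start offset, size) touch a common line."""
    lines_a = set(range(start_a // line, (start_a + size_a - 1) // line + 1))
    lines_b = set(range(start_b // line, (start_b + size_b - 1) // line + 1))
    return bool(lines_a & lines_b)
```

Two 24-byte structures packed back to back share line 0; once each is padded and aligned to a line boundary, they no longer overlap in any line.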

earlier in the late 70s, we had a 16-way 370 effort that relaxed the
memory consistency model (to reduce the cache overhead effects) that
co-opted spare time of the 3033 processor engineers ... who were charged
with q&d effort to remap 168-3 logic to some left over FS technology
... somewhat discussed here:
http://www.jfsowa.com/computer/memo125.htm Discussion of old FS evaluation

things were going great guns ... until somebody happened to mention to
the head of POK that it could be decades before the POK favorite son
operating system had effective 16-way support. That resulted in the head
of POK telling the 3033 processor engineers to get back to only working
on 3033 and stop being distracted ... and other people were invited to
never show their face in POK again.

HONE was originally created in the wake of the 23Jun1969 unbundling
announcement, which started charging for SE services, application
software, etc. Prior to unbundling, junior and apprentice SEs would get
their training as part of a large team onsite at the customer
location. After unbundling, onsite SE time at the customer had to be
charged for.
http://www.garlic.com/~lynn/submain.html#unbundle

HONE was originally going to provide guest operating system "hands-on"
at branch offices in (CP67) virtual machines. However, science center
had also ported apl\360 to cms for cms\apl ... and there started to be
a lot of sales&marketing support applications developed in
cms\apl. This eventually comes to dominate all HONE activity and the
guest operating system operation withers away.
http://www.garlic.com/~lynn/subtopic.html#hone

Trivia ... in the mid-70s US HONE was consolidated in datacenter in
silicon valley. Somebody else occupies that bldg now ... but it is
right next door to the former facebook hdqtr bldg (before facebook
bought the old SUN campus). In the late 70s, it was possibly the
largest single-system image (with load-balancing and fall-over)
mainframe complex in the world (lots of loosely-coupled large
multiprocessor mainframes). After an earthquake in cal. ... it was
replicated first in Dallas and then another in Boulder (with
coordinated operation across all three datacenters).

While an undergraduate in the 60s, I got sucked into helping Boeing
consolidate its computer business into Boeing Computer Services
(better monetize its computer investment). At the time, I thought
Boeing's renton datacenter was largest in the world ($200M-$300M in
large IBM mainframes). John Boyd's biographies reference that about
the same time, he was running "spook base" ... including it being a
$2.5B windfall for IBM (ten times renton datacenter) ... Boyd would
comment that "spook base" had the largest air conditioned bldg. in
that part of the world.
http://www.garlic.com/~lynn/subboyd.html

and x-over from some other IBMers discussion ... Stockman in "The
Great Deformation: The Corruption of Capitalism in America"
pg465/10014-17:
Total shareholder distributions, including dividends, amounted to $82
billion, or 122 percent, of net income over this five-year
period. Likewise, during the last five years IBM spent less on capital
investment than its depreciation and amortization charges, and also
shrank its constant dollar spending for research and development by
nearly 2 percent annually.

... snip ...

As HONE was being cloned around the world ... I would frequently be
asked to go along. First was when EMEA hdqtrs moved from westchester
to Paris ... and I was asked to go onsite for the HONE clone
supporting EMEA hdqtrs (disclaimer, I never had an official
relationship with any of the HONE operations, it was just one of my
hobbies ... as long as it didn't come to upper management attention, I
was allowed to have lots of hobbies). However, I will tell you ... in
the early 70s ... it took quite a bit of effort to figure out how to
read my email in the US from Paris.

as an aside ... much of my archives (triple replicated on different
tapes in the Almaden tape library) from science center days were lost
in mid-80s problem in the Almaden datacenter where random tapes were
being mounted as scratch.

the part about MVS having a 15min MTBF when it was tried in that
environment ... I happened to include it in an internal report on what
went into rewriting the i/o supervisor to be bullet proof and never fail
... which turned out to bring the wrath of the MVS group down on my head
(if they could have figured out how, they would have gotten me fired).

bldgs 14 and 15 tended to get one of the first engineering models for
disk testing. bldg 15 had gotten one of the first engineering 3033s
... and with a real operating system had nearly a full 3033 to play with
(even with all device testing running concurrently, testing rarely used
more than a percent or two of processor time).

other folklore (in part related to the recent anniversary in Dallas)
... not long after I joined IBM, IBM hired a new CSO that had been
head of the presidential detail (expert in physical security) ... and
being one of the bright young computer security experts ... I got
tasked with going around with him ... a little of the physical security
expertise rubbing off. It turns out at the time of Dallas he had been
head of the 3rd shift detail and had already moved on to the next city.


Bet Cloud Computing to Win

large cloud operators for the last decade or so have been claiming
they assemble their own blade servers for 1/3rd the cost of brand name
vendors. there are rumors that some of the brand name vendors are
doing a side-line of building for some of the smaller cloud customers at
close to the cost of the large cloud operators. This possibly
contributed to news reports earlier this year that IBM was trying to
sell off its x86 server business.

as a consequence of the significant drop in (commoditized) system
costs ... other costs of operating their large megadatacenters
(hundreds of thousands of servers and millions of processors) have
come to be proportionally larger (power, cooling, administration,
maintenance, etc) ... contributing to the large cloud operators being on
the bleeding edge of green computing. Various reports have the number of
people running a large megadatacenter between 60-120 (7x24
coverage). Any one of the large cloud megadatacenters has more total
aggregate processing power than all the mainframes in the world today.

the ondemand characteristic of lots of megadatacenter use has been a
major motivation behind server chip manufacturers adding features
where chips drop to near zero power/cooling when idle ... but are
instantly able to come up to full operation. there are news items that
the server chip manufacturers are now shipping more chips directly to
the large cloud operators than to the brand name vendors (and those
chips don't show up in the "vendor" server market numbers).

a max. configured z196 with 80 processors is rated at 50BIPS and goes
for $28M. IBM financials say that the mainframe business earns a total
of $6.25 for every dollar of processor sales ... aka a $28M z196 results
in a total of $175M ... or $3.5M/BIPS. The mainframe business accounts
for 25% of earnings and 40% of profit ... but mainframe processor sales
only account for 4% of earnings. Regardless of declining/increasing
sales ... the mainframe business is still a significant cash cow for
ibm.

IBM also has a base list price of $1815 for an e5-2600 blade, which has
processor ratings between 400-600BIPS ... or around $3.5/BIPS (a million
times less than the z196). The large cloud operator claim of 1/3rd the
cost of brand name vendors drops that to around $1/BIPS.
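The price/performance arithmetic above, worked through (numbers as given in the text; the blade BIPS figure is taken at the midpoint of the stated 400-600 range):

```python
# z196: $28M for 50 BIPS; the mainframe business earns $6.25 total per
# dollar of processor sales, so a $28M z196 yields $175M in total
z196_total = 28e6 * 6.25
z196_dollars_per_bips = z196_total / 50          # $3.5M per BIPS

# e5-2600 blade: $1815 base list price, ~500 BIPS (midpoint of 400-600)
blade_dollars_per_bips = 1815 / 500              # ~$3.63 per BIPS

# claimed cloud-operator build cost is 1/3rd of brand name price
cloud_dollars_per_bips = blade_dollars_per_bips / 3   # ~$1.21 per BIPS

# the per-BIPS gap is roughly a factor of a million
ratio = z196_dollars_per_bips / blade_dollars_per_bips
```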

FICON is a heavy-weight protocol layer that runs on top of the
fibre-channel standard and drastically reduces the throughput of native
FCS. A recent (single) FCS announcement for an e5-2600 blade is claiming
over a million IOPS (i.e. two such FCS would have higher native
throughput than 104 FICONs).

from the standpoint of lots of customers it was a superset of SIE in that
it provided virtual machines with even less overhead and without even
needing a separate (software) operating system (although for the original
138/148 ECPS ... done in the period when POK was busily trying to
totally kill off vm370 ... Endicott's effort to make virtual machines
part of every machine shipped was veto'ed by corporate hdqtrs).

however, from the operating system standpoint ... it took quite awhile
for SIE virtualization ... i.e. VM running in a PR/SM logical
partition initially wasn't able, in turn, to use SIE (since it was being
used by PR/SM).


torte reform, was 'Free Unix!' proclamation made 30 years ago today

Lon <lon.stowell@comcast.net> writes:
Maybe start like SNA with three layers, expand to five, then during
the OSI era, expand to 7 of which a couple are bogus IMNHO. Surprised
IBM didn't go for 8 layers just to top OSI in some manner other than
that nobody in their right mind used OSI.

note I was doing a lot with TCP/IP and the communication group was
giving me a really hard time ... including when the head of the
communication group came out with public strategic statement supporting
OSI.

z/OS is antique WAS: Aging Sysprogs = Aging Farmers

jwglists@GMAIL.COM (John Gilmore) writes:
The U.S. Customs Service defines an antique artefact as something that
is at least 50 years old, and if z/OS is identified with its antetype,
OS/360, it is or will shortly be an antique.

Now 'antique' and 'antiquated' are closely related etymologically; but
'antiquated' is pejorative. To antiquate is to make obsolete, and I
am not sure that z/OS is obsolete.

from above
OS/2 was plagued by delays and bureaucratic infighting. IBM rules
about confidentiality meant that some Microsoft employees were unable
to talk to other Microsoft employees without a legal translator
between them. IBM also insisted that Microsoft would get paid by the
company's standard contractor rates, which were calculated by "kLOCs,"
or a thousand lines of code. As many programmers know, given two
routines that can accomplish the same feat, the one with fewer lines
of code is generally superior -- it will tend to use less CPU, take up
less RAM, and be easier to debug and maintain. But IBM insisted on the
kLOC methodology."

ACA (Obamacare) website problems--article

jmfbahciv <See.above@aol.com> writes:
when DEC was DEC, moving expenses were paid by DEC. Hiring a new guy
also included moving him/her, family and stuff. Later, tax laws
made these costs taxable as income; I don't remember if it was a
percentage of the amount or the entire amount.

with the change from tax-free compensation ... it became ordinary income
... and the person then had to file a tax return showing deductible
moving expenses. there were cases where some previously tax-free moving
compensation was not considered deductible moving expenses ... so it
would be "up-lifted" (to cover the taxes that had to be paid)


'Free Unix!': The world-changing proclamation made 30 years ago today

Peter Flass <Peter_Flass@Yahoo.com> writes:
I don't know how *all* systems handle SMP, but in the IBM world it's
possible for one CPU to fail and the others take over. This is
different from some early non-SMP systems where the system that did
the I/O was determined at boot time (or via a hardware switch maybe?)
and was fixed.

360&370 smp (except for the 360/67) was 2-way with only memory shared.
channels & i/o weren't shared ... both processors had dedicated
channels. shared i/o was simulated by using twin-tail (connected to two
different channels) controllers ... where the connections were at the
same addresses ... however not all ibm controllers had twin-tail
connectivity ... so i/o couldn't be fully symmetrical. an smp
characteristic was that the machine could be divided into two
independently operating single processors (but dividing the system
required shutting down and restarting).

360/67 smp had independent "channel controller" ... so all processors
could address all channels.

the 3081 was called dyadic ... while it was two-processor ... there were
some number of common components ... so it couldn't be divided into two
independently running systems. however the 3081 did support all
processors addressing all channels ... the 3081 also had 31-bit
addressing ... more than 24-bit addressing hadn't been seen since the
360/67 (which supported 32-bit addressing).


'Free Unix!': The world-changing proclamation made 30 years ago today

Peter Flass <Peter_Flass@Yahoo.com> writes:
Done, long ago. We had (I can't claim to be the one to set this up)
several 3880 controllers each connected to two channels on the
mainframe on the front end, and cross-connected to multiple strings of
3380s on the back end, so that there were at least two independent
paths to everything.

a head of string (sort of a mini-controller) could connect to two
different 3880 controllers ... and each 3880 controller could have four
channel interfaces ... giving each drive up to eight channel paths. for
availability ... each 3880 controller could have two jib-prime
microprocessors (adding lots of internal overhead and processing
latency).

the 3880 also had additional overhead for handling multiple
channel paths. as part of trying to compensate for the horrible
processing overhead and latency ... the jib-prime tried to keep/cache
information about the channel interface for the most recent i/o.

however, if the next i/o came from a different channel interface,
the cached information had to be discarded and a bunch of new
information loaded ... which significantly increased the i/o latency
for multi-channel operation (compared to the already really bad
i/o latency for single channel operation).

multiple channel operation could be used for loosely-coupled (cluster)
availability (attachment to multiple different systems), smp
configuration simulation (mainframe smp only supported shared memory,
not shared i/o ... so it required dedicated channels configured with
controllers that had attached channels at the same address ... simulating
smp i/o) and/or throughput load balancing ... a pool of disks with
channels that could have concurrent transfers.

i mentioned rewriting i/o supervisor for bullet proof and never fail,
super high-availability ... but also super-short pathlength ... for
quick i/o redrive (minimizing idle time for heavily loaded device
between the end of the previous i/o and the start of the next i/o).

most standard multi-path i/o for a single system with multiple channel
paths to the same devices used a primary path ... and only used
alternates if the primary was busy. I tried adding super-sophisticated
load balancing across all available paths ... but that ran into trouble
with the 3880 problem (extreme added overhead) of constantly switching
channel interfaces. It turned out that primary/alternate was much more
efficient, since it tended to minimize the switching of channel
interfaces.
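The primary/alternate ("sticky") path choice above can be sketched as follows (illustrative only; the path ids and busy-set interface are my own invention, not any actual i/o supervisor interface):

```python
def pick_path(paths, busy, last_used):
    """Primary/alternate selection: reuse the last-used path unless it
    is busy, falling back to the first free alternate.  Sticking to one
    channel interface keeps the controller's cached per-interface state
    valid -- avoiding the discard/reload penalty incurred whenever the
    next i/o arrives on a different interface (which is what balancing
    across all paths would cause on nearly every operation)."""
    if last_used in paths and last_used not in busy:
        return last_used
    for p in paths:
        if p not in busy:
            return p
    return None  # all paths busy: queue the operation for later redrive
```

The design point is exactly the one in the text: with a controller that pays a heavy price for switching interfaces, minimizing switches beats spreading the load.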

Wylie discernible patterns

just finished "Strategy: A History" and started reading "On Strategy: A
Critical Analysis of the Vietnam War" ... it has the scenario that the
war the US thought it was fighting was different than the war that
N. Vietnam thought it was fighting

lots of references to the mismatch between vietnam and US strategy and
tactics ... however nothing yet about Boyd's comments in briefings
that top-level pentagon strategy focus was budget size and budget
share ... with vietnam almost incidental other than how it affected
their primary focus. this wanders into perpetual war and the Success Of
Failure, with MBAs and gaming showing that immediate success leaves
money on the table (compared to various other scenarios).

from above:
The story of OS/2 is now fading into the past. In today's fast-paced
computing environment, it may not seem particularly relevant. But it
remains a story of how a giant, global mega-corporation tried to take
on a young and feisty upstart and ended up retreating in utter
defeat. Such stories are rare, and because of that rarity they become
more precious. It's important to remember that IBM was not the
underdog. It had the resources, the technology, and the talent to
crush the much smaller Microsoft. What it didn't have was the will.

... snip ...

almost sounds like theme from USA/Vietnam conflict


Decades ago there used to be references to wild duck employees at IBM
(and the need for same). The recent 100 yr celebration had expunged all
references to wild duck employees and replaced them with examples of
wild duck customers.

The transition started in the middle 70s after the failure of the
future system effort. In the late 70s there was a poster (the kind that
goes on a wall) that wild ducks are tolerated as long as they fly in
formation.

... and noted that nothing had been mentioned of the periodic
Boyd/reform movement refrain that top Pentagon attention was on budget
size & share ... and Vietnam was almost incidental except to the extent
that it affected budget size & share.

I thought they were almost going to get to it in the chapter 14 on
"Simplicity" ... when it mentioned pentagon was treating Vietnam
"business as usual" (loc2679) .... but it managed to wander off in
another direction (and not the Eisenhower warning about
MICC). "Simplicity" also spent a lot of time really on "focus"
... "simple" tightly related to keeping focus on objective ... mostly
ignoring other possible benefits.

Wylie discernible patterns

I finished "On Strategy" and I thought they were almost going to get
to the top of Pentagon preoccupied/focused on budget size (and Vietnam
incidental other than how it affected budget) in the chapter 14 on
"Simplicity" ... when it mentioned pentagon was treating Vietnam
"business as usual" (loc2679) .... but it managed to wander off in
another direction (and not the Eisenhower warning about
MICC). "Simplicity" also spent a lot of time really on "focus"
... "simple" tightly related to keeping focus on objective ... mostly
ignoring other possible benefits (possibly simplicity&focus
applied to top of Pentagon and myopic focus on budget size/share).

maybe I was being sarcastic ... in the 60s, I did computer operating
system dynamic adaptive resource management (that was picked up and
shipped in an IBM product while I was still an undergraduate). It
included support for multiple variable optimization/trade-offs.

when I joined the science center ... they were doing extensive
monitoring of computer activity 7x24 ... and doing workload profiles,
system activity profiles, hardware profiles ... snapshots every couple
minutes. the monitoring facility then propagated out to lots of other
internal systems ... so eventually had years of data for hundreds of
(different) systems (& workloads). One of the representations was a
multi-axis graph depicting workload, system activity, configuration,
etc. One variation was to overlay everything on one graph, showing the
maximum value for every possible characteristic. By the mid-70s, this
was also starting to morph into capacity planning.

When I met Boyd ... his graphs characterizing fighter planes were
somewhat similar ... as was his explanation of designing planes by
making trade-offs between different characteristics.

--
virtualization experience starting Jan1968, online at home since Mar1970

z/OS is antique WAS: Aging Sysprogs = Aging Farmers

PaulGBoulder@AIM.COM (Paul Gilmartin) writes:
With a brief exposure to MVS, I started to learn CMS. I was shocked
(briefly) to learn that file names might begin with numeric digits; in
fact be entirely numeric. Why not in OS/360 data set names? In an
era of severe storage and CPU cycle constraints, the lexical analyzer
would have been simpler for not needing to treat the first character
specially. Would allowing numeric data set names have introduced a
syntactic ambiguity in JCL or elsewhere? Member names couldn't
unambiguously be numeric because of GDG levels.

I periodically pontificated that the batch heritage systems were for the
convenience of the systems ... while people might prepare the program
... batch characteristic was that the responsible person(s) usually
wasn't around ... and it was important that many things be able to run
w/o the responsible person present.

this is a different paradigm from the online systems ... for instance
linux traces to unix to multics to ctss ... while vm370/cms trace to
cp67/cms to the same ctss ... and is much more oriented to the
convenience of people ... not to the system ... with a person much more
likely to be directly involved with running an application.

the batch system heritage would focus much more on computer resource
optimization than people resource optimization ... this was common
refrain from the 60s up through much of the 80s by POK favorite son
operating system people.

I've mentioned before my wife's father had received a set of Fiske
history books (lectures given during the 1880s) for some distinction at
West Point. One of the points that if it hadn't been for the Scottish
influence from the mid-atlantic states ... the US form of government
would look much more like England's (much less equality, under the
influence of the former British from more northern states).
https://en.wikipedia.org/wiki/John_Fiske_%28philosopher%29

A Little More on the Computer

For a tangent ... there is the use of computers to obfuscate bad
behavior. There were claims, as the economy was crashing, that risk
models had approved all the risky activity. At the same time there were
calls for risk managers to be made independent of business people
... because business people had ordered the risk managers to fiddle
the risk model inputs until they got the desired outputs (GIGO,
garbage in, garbage out). Similarly, HFT is now frequently being
used to obfuscate questionable activity. Recent reference ("The Wall
Street Code: HFT Whistleblower Haim Bodek on Algorithmic Trading")
http://www.garlic.com/~lynn/2013n.html#40

We know some of the people that had been at IBM during the 60s
responsible for the original computer FAA air traffic control
system. In the late 80s, there were several efforts to
modernize/re-engineer the systems (one of several large federal legacy
computer modernization projects that failed in the 80s & 90s). We got
pulled into a review of problems with the FAA/ATC re-engineering
efforts. One of the issues was that the people (all newbies) designing
and implementing the application were told that they would not have to
consider/design for problems & errors. The issue was that the
underlying system would be triple replicated and any hardware failures
would be "masked" from the FAA/ATC application (and didn't have to be
considered). However, there is a whole class of problems that exist
at the ATC procedural level ... not involving underlying hardware
issues. One such problem is when there is a "hand-off" between FAA/ATC
centers and the receiving controller (human) doesn't realize that they
have a new flight in their space.

... aka there is a frequent disconnect between the people
designing/programming the computer application/software and the real
domain experts that have an in-depth understanding of the existing
operation/process. Boyd had a story on the subject about the people that
did the Air Force air-to-air missile used in Vietnam (they claimed a
100% hit rate, Boyd reviewed it and suggested 10% or less ... roll
forward to Vietnam and Boyd was right).

note some of the large beltway bandits that were teaching new MBAs
business process to redo business operations w/o being domain experts
in the business that they were redoing ... used the same paradigm for
re-engineering large legacy computer systems

co-worker at science center was a (real) Hatfield (some close relatives
directly involved in the events) ... there was also an offspring of one
of the people that discovered DNA ... science center was just part of
4th flr 545 tech sq ... and was mostly around 35 people.
http://www.garlic.com/~lynn/subtopic.html#545tech

I've mentioned before that in the late 80s, a senior disk engineer
opened a talk at the annual, world-wide communication group conference
with the statement that the communication group was going to be
responsible for the demise of the disk division ... the communication
group had corporate strategic "ownership" of everything crossing the
datacenter wall, protecting their dumb terminal paradigm and install
base, fiercely fighting off client/server and distributed computing. the
disk division was seeing a fall in mainframe disk sales with data
fleeing to more distributed-computing-friendly platforms. the disk
division had come up with a number of solutions ... which were being
constantly vetoed by the communication group. This was also a factor
leading up to IBM going into the red a couple years later ... and the
subsequent re-organization into the 13 "baby blues" in preparation for
breaking up the company (which was reversed when the board brought in
Gerstner).

we knew the disk division senior vp and would get asked to help with
work-arounds to the communication group ... one of which was the
original POSIX support in MVS. There is a separate claim that
gov. bids required POSIX. other
activity was putting investments into other companies as part of those
companies turning out distributed computing solutions for mainframe (and
we were asked to periodically come in to those companies to assist with
their activity).

we did point out that main motivation behind POSIX was so that customers
could more easily migrate to the lowest cost platform (disk division was
looking to ease port of many of these applications to the mainframe
... which was one of the highest cost platforms).

at very high executive level ... POSIX just appears to make porting easy
.... including the port of non-mainframe applications to mainframe
... helping with the issue of moving distributed computing
applications to the mainframe.

the major market motivation for POSIX was to make it easy to frequently
migrate to whatever the best price/performance platform happened to be
at the moment (masking proprietary hardware and operating system
features ... that would otherwise lock in customers).

note in this time-frame we had come up with 3-tier architecture and
taking lots of arrows in the back from the communication group. we had
mainframes at top tier ... but (of course) none of the mainframe
hardware attachments were from the communication group.

part of 3-tier and the non-ibm mainframe interfaces ... also included
10mbit ethernet ... and communication group, SAA organization and the
token-ring people were generating all sorts of FUD.

my wife had written 3-tier into response to large gov. RFI that also
happened to have the very highest security requirements. We were also
doing 3-tier customer executive presentations (that the communication
group was trying to shutdown and/or at least discredit). lots of
past posts
http://www.garlic.com/~lynn/subnetwork.html#3tier

a trivial example of the communication group orientation ... was the
16mbit t/r microchannel adapter card. it had been shown that aggregate
10mbit ethernet LAN throughput was higher than 16mbit t/r, as well as
having lower latency.

however, the 16mbit microchannel t/r adapter also had very low per card
throughput ... the design point was 300+ stations doing terminal
emulation all sharing the common bandwidth.

the workstation group had done their own 4mbit t/r card for the PC/RT
(PC/AT 16bit bus). for the rs/6000 with microchannel, the group was told
they couldn't do any of their own cards (communication group at it
again). the problem was that the per card throughput of the standard
16mbit t/r card was (also) less than the pc/rt 4mbit t/r card ... aka a
pc/rt 4mbit t/r server had higher server throughput than rs/6000 server
with 16mbit t/r microchannel card.

hancock4 writes:
The attraction of the IBM PC was that IBM, in tradition with its
previous offerings, sold it as a _solution_ along with application
products to do specific tasks. Basic tasks such as spreadsheet, word
processing, and database were obviously available from IBM, but other
vendors had their own offerings. The IBM also conferred some
legitimacy.

my frequent theme was that terminal emulation contributed to the big
uptake early on. large corporations were justifying buying tens of
thousands of 3270 terminals ... an ibm/pc with 3270 emulation was about
the same price as a real 3270 ... a company could get a 3270 terminal in
a single desktop footprint with some local compute capability. it was a
no-brainer to switch an already justified business case for 3270
purchase to the ibm/pc (aka it didn't require any additional business
justification). It wasn't actually necessary to get into the
capabilities of the local software as part of justifying the purchase.

that got a million or so sold requiring little or no incremental
business justification. that is similar to, but different from, the ibm
brand name providing legitimacy.

other PCs tended to actually require their own business case
justification (not getting free pass from 3270 emulation capability)

later as PCs got more powerful ... the (mainframe) communication group
is fighting hard to prevent client/server and distributed computing,
trying to preserve its dumb terminal (emulation) paradigm and install
base.

'Free Unix!': The world-changing proclamation made 30 years ago today

hancock4 writes:
A big shift in the way mainframe computing was done was circa 1985
when end users would download extracted files from the mainframe and
use a spreadsheet program to analyze the data and generate their own
custom made reports. This right there was a drastic change of
thinking and chink in the wall of the mainframe world. No longer did
end-users need to come, hat in hand, to the mainframe priests and get
in line to ask for a report; they could run them themselves at their
convenience.

the ibm disk division noticed this (with drop in mainframe disk sales)
and tried to come up with all sort of ways where the users could keep
their data in the datacenter ... with backup and availability ... and
still access it as easily as if it was on local disk.

however, as I've frequently commented, the communication group had
corporate strategic ownership of everything that crossed the datacenter
walls and were fiercely defending their dumb terminal paradigm and
install base ... fighting off client/server & distributed computing.
They were constantly vetoing disk division solutions ... which was a major
factor in senior disk engineer opening talk at annual internal
world-wide communication group conference with the statement that the
communication group was going to be responsible for the demise of the
disk division.

above mentions coming up with 3-tier and (over 25yrs ago) including it
in a response to RFI for large distributed gov. infrastructure with
exceedingly stringent security requirements ... with all the stuff in
the news recently ... possibly things that they've let lapse
... misc. past 3-tier posts
http://www.garlic.com/~lynn/subnetwork.html#3tier

hancock4 writes:
Speaking of address spaces, would anyone recall what model and size
was the largest standard IBM in 1980--the time the PC discussions were
going on? Did they break the 16 meg S/360 addressing barrier yet to
go to XA? (In 1980 the S/370-158 mainframe running our entire
organization had all of 8 meg to it).

As I've periodically noted ... by mid-70s, systems were moving to I/O as
bottleneck and increasing use of storage for caching (to compensate for
slow I/O).

datacenter space was becoming a premium and large corporations were
finding with the advent of mid-range 4300s they could put them out in
departmental areas (very low resource footprint). however, in the
datacenter it was also possible to have clusters of 4341s with more
aggregate processing power than 3033, more aggregate channels and I/O
than 3033, more aggregate real storage than 3033, smaller datacenter
resource and footprint than 3033, and lower cost than 3033 (aka
significantly better price/performance). misc. past 43xx related email
http://www.garlic.com/~lynn/lhwemail.html#43xx

somewhat to compensate, the 3033 came up with a gimmick to configure
64mbytes of real storage ... even with the 16mbyte addressing
restriction. the 370 page table entry was 16bits: a 12bit page number
(with 12bit/4kbyte pages giving 24bit/16mbyte addressing), 2 defined
bits, and two reserved/unused bits. The two reserved/unused bits were
co-opted and prepended to the 12bit page number ... resulting in a 14bit
page number ... with 12bit/4kbyte pages, that addresses 26bit/64mbytes
of real storage.

no instruction could generate more than a 24bit address ... but running
virtually ... a 24bit virtual address could be translated into a 26bit
real address.
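the gimmick is easy to sketch ... a minimal python illustration (python
is obviously anachronistic here, and the exact bit positions in the PTE
are assumed for illustration, not taken from the architecture doc):

```python
# sketch of the 3033 real-addressing gimmick: a 16bit page table entry
# holds a 12bit page frame number (12bit frame + 12bit byte offset =
# 24bit/16mbyte real addressing).  co-opting the two reserved/unused
# bits as high-order frame bits gives a 14bit frame number, i.e.
# 26bit/64mbyte real addressing -- while instructions still only ever
# generate 24bit virtual addresses.
# NOTE: the bit positions below are assumed for illustration.

PAGE_SHIFT = 12            # 4kbyte pages

def translate_24bit(pte: int, vaddr: int) -> int:
    """standard translation: 12bit frame number from the PTE."""
    frame = (pte >> 4) & 0xFFF              # assumed: frame in high 12 bits
    return (frame << PAGE_SHIFT) | (vaddr & 0xFFF)

def translate_26bit(pte: int, vaddr: int) -> int:
    """3033 gimmick: two reserved bits prepended to the frame number."""
    frame = (pte >> 4) & 0xFFF
    frame |= (pte & 0x3) << 12              # assumed: reserved bits at the bottom
    return (frame << PAGE_SHIFT) | (vaddr & 0xFFF)

# with both co-opted bits set, a 24bit virtual address lands above the
# 16mbyte line: translate_26bit(0xFFF3, 0xFFF) == 64*1024*1024 - 1
```

note the point: the virtual address never exceeds 24 bits; only the
translated real address does.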

there was still all sorts of stuff that had to be below the 16mbyte line
... so there were various requirements for bringing stuff from above the
16mbyte line to below the 16mbyte line. The "original" POK solution for
"bring down" was to write a page (from above the 16mbyte line) out to
disk and read it back in (below the 16mbyte line).

I sent them a hack involving a dummy virtual address space with pointers
to the original virtual page above the 16mbyte line and its target
location below the 16mbyte line ... and doing an MVCL to copy the page
below the line. old email ref
http://www.garlic.com/~lynn/2006t.html#email800121

there was also a major issue with the page replacement algorithm
treating pages above the 16mbyte line differently than pages below
the 16mbyte line ... resulting in a less than optimal page replacement
algorithm
http://www.garlic.com/~lynn/2007b.html#email860124

hancock4 writes:
Speaking of address spaces, would anyone recall what model and size
was the largest standard IBM in 1980--the time the PC discussions were
going on? Did they break the 16 meg S/360 addressing barrier yet to
go to XA? (In 1980 the S/370-158 mainframe running our entire
organization had all of 8 meg to it).

turns out that MVS had another 24bit addressing problem ... separate
from its real storage bloat and being able to address only 24bits of
real storage.

the initial move from MVT to virtual memory was OS/VS2 SVS ... single
virtual storage ... or MVT layed out in a single 16mbyte virtual address
space. Not all that different from running MVT in a 16mbyte virtual
machine address space ... except there was a little bit of code cribbed
into the side of MVT to setup its own virtual memory tables and to
handle page faults. The biggest bit of code going from MVT to SVS was
having to translate channel program virtual addresses to real addresses
... i.e. CCW channel programs were built by library routines running as
part of the application and passed to the supervisor via EXCP/SVC0. The
problem was that all the addresses were virtual and the 370 channels ran
with real addresses. Initially, the channel program translation routine,
CCWTRANS was borrowed from (virtual machine) CP67 and patched into
EXCP/SVC0 handling.
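the core job of a channel program translation routine like CCWTRANS can
be sketched as follows (a toy python model, not the actual CP67 code;
the page_map is a hypothetical stand-in for the page tables, and in the
real system pages would also be fixed in storage for the I/O duration):
virtually-contiguous pages need not be contiguous in real storage, so a
transfer crossing a 4k page boundary has to be split into data-chained
pieces with real addresses:

```python
PAGE = 4096

def translate_ccw(vaddr, length, page_map):
    """return a list of (real_addr, length) pieces for one CCW's data area.

    page_map: dict mapping virtual page number -> real page number.
    """
    pieces = []
    while length > 0:
        vpage, offset = divmod(vaddr, PAGE)
        chunk = min(length, PAGE - offset)      # stop at the page boundary
        real = page_map[vpage] * PAGE + offset
        # coalesce if real storage happens to be contiguous anyway
        if pieces and pieces[-1][0] + pieces[-1][1] == real:
            pieces[-1] = (pieces[-1][0], pieces[-1][1] + chunk)
        else:
            pieces.append((real, chunk))
        vaddr += chunk
        length -= chunk
    return pieces
```

a 768-byte transfer starting 256 bytes before a page boundary comes out
as two pieces when the two virtual pages map to scattered real frames,
and as one piece when they happen to be adjacent.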

The transition from SVS to MVS involved giving every application its own
16mbyte virtual address space. However, the OS/360 heritage was heavily
pointer-passing API based ... so part of the transition was to map an
8mbyte image of the MVS kernel into every application 16mbyte virtual
address space.

Another part was that a significant amount of OS/360 services were
provided by "subsystems" that were outside the kernel and in MVS now
occupied their own separate 16mbyte virtual address spaces. In order to
allow
applications to use pointer passing API to subsystems ... a "common
segment" was defined that occupied 1mbyte of every virtual address space
(allowing applications and subsystems to pass pointers back and forth
that addressed the same area). This morphed into CSA (common system
area) with size requirement that was basically proportional to the
number of subsystems and the number of concurrent applications using
those subsystems. By the 3033 time-frame, it was common to have 4-5mbyte
CSAs which were threatening to become 5-6mbyte ... reducing actual
application area in their private 16mbyte address spaces to only
2mbytes.
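the squeeze is simple arithmetic (a sketch using the round numbers
above):

```python
# every MVS virtual address space is 16mbytes, with an 8mbyte kernel
# image mapped into each one; CSA (common to all address spaces) grew
# roughly with subsystems x concurrent applications, eating what was
# left for the application.
MB = 1 << 20

def private_area(csa_mb: int) -> int:
    """application-usable bytes left in a 16mbyte address space."""
    return 16 * MB - 8 * MB - csa_mb * MB

# the original 1mbyte common segment leaves 7mbytes private;
# a 3033-era 5mbyte CSA leaves 3mbytes; a 6mbyte CSA only 2mbytes.
```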

Sat. recon photo analyst describes warning that Iraq was marshaling
forces to invade Kuwait. White House comes back, discrediting the
analyst, saying that Saddam would do no such thing. White House
finally takes some action when he warns that forces are being
marshaled for invasion of Saudi Arabia (still Team B)
http://www.amazon.com/Long-Strange-Journey-Intelligence-ebook/dp/B004NNV5H2/

Start of the century, presidential records from the 80s are due to be
released to the public under the Presidential Records Act. One of the
first acts of the new president is an executive order that keeps them
from being released. Claims are that plans (by members of Team B) for
invasion of Iraq started at the same time, well before 9/11 ... if it's
self-delusion by members of Team B ... it spans nearly 40yrs

given the enormous fabrication used to justify the invasion, a
prevention/preemption discussion seems to be simple diversion;
alternative would be "continuous conflict" and perpetual war theme
... which has MICC constantly deprecating diplomacy. Team B members
involved in both Iraq invasions were also heavily involved in the
Iraq/Iran war ... eventually becoming arms merchants to both sides.
http://www.foreignpolicy.com/articles/2013/08/12/last_war_standing_preemptive

A Little More on the Computer

there were a lot of legacy computer systems done early in the careers
of babyboomers ... which were still around in the 90s ... but weren't
involved in most of the (failed) re-engineering efforts. Around the
first of the century, babyboomers retiring was identified as a major
systemic risk. The babyboomers provided care&feeding for many legacy
computing systems ... and still knew the how&why things were done. The
following generations tended to just use the systems ... but there was
fear that there would be enormous loss of institutional knowledge with
babyboomers retiring.

there was something analogous in a recent news story about commercial
pilots losing skills because they weren't getting enough manual
flying time, planes spending so much time on auto-pilot.

Good model also implies that there is first understanding ... which is
a benefit by itself.

Basel has a risk-adjusted capital reserves model. The original Basel-2
draft added a qualitative section to the quantitative ... we showed how
it could be done ... basically the board and senior bank executives
demonstrate they understand the business. During the review process,
the banks (mostly too big to fail) eliminated the new section. We
conjectured that the transparency would expose a lot of the fraud
involved

Bloomberg playing in background ... just had Tim Howard (Who To Blame)
... interviewer appeared to try and misdirect point of his book. Note
that there was some use of securitized mortgages during the S&L crisis
to obfuscate fraudulent mortgages. In the late 90s, we were asked to
look at improving the integrity of securitized mortgage supporting
documents (as countermeasure to fraudulent mortgages). However, the
loan origination industry found that they could pay the rating
agencies for triple-A rating (from Oct2008 hearings, where both
sellers and rating agencies knew they weren't worth
triple-A). Triple-A trumps supporting documentation, and with no
supporting documentation there is no longer any documentation
integrity issues (no-documentation, no-down, liar loans). This
momentarily shows up when there was some fiction that TARP funds would
actually be used to buy toxic assets and complaining how hard it was
to accurately value these securitized mortgages (made really hard with
no documentation). There was over $27T done during the bubble and
there was huge motivation for wallstreet to go-along because of the
enormous fees and commissions (possibly $4T-$5T, besides securitized
mortgages designed to fail, selling to their customers, and then CDS
gambling bets they would fail) ... claims that industry tripled in
size (as percent of GDP) during the bubble.

... Greenspan allowed too big to fail to carry the toxic stuff
"off-book" so it wasn't included in risk models. At the end of 2008,
just the four largest TBTF were carrying $5.2T off-book, which had a
market price of 22 cents on the dollar

original purpose of patents in the constitution was to protect
individual inventors from institutions trying to preserve the status
quo. increasingly patent system is being used to protect large
institutions and/or by patent trolls for financial gain ... unrelated
to the original purpose

The original intention of patents in the constitution was to promote
individual inventors and innovation .. protecting them from
institutions trying to preserve status quo. All too often large
institutions are now using patents to protect status quo and even
slow-down innovation ... you see large institutions trying to maximize
profits and their investment ... not trying to maximize
innovation. There are studies that drug companies slow down the
introduction of new drugs ... because there is still profit to be made
from existing drugs. It is one of the motivations behind extending
life of patents and copyrights .... original intention was that they
only last a decade or two ... as part of promoting continuous
innovation (not maximizing profit)

note frequently part of changing the constitutional narrative from
patents being a license for short, limited period of time (as part of
promoting innovation for the benefit of society) to "intellectual
property rights" ... is trying to get to concept of ownership for
unlimited period as part of maximizing profits

changing the constitutional narrative of short-term license to one of
property ... contributes to the buying/selling of the property
... which then starts down the slippery slope to trolls. There are
also submarine patents ... where patents are filed with extremely
obscure wording that can be interpreted in large number of different
ways. A detailed semantic analysis of large number of patents found
something like 30% of computer "related" patents were being filed
under extremely obscure categories (making it very unlikely they would
show up in normal patent search). They would wait until somebody was
making a profit off something that might possibly be interpreted as
coming under the description of the submarine patent ... and then
claim patent infringement (never any intention of producing product
for benefit of society)

Competing forces at work ... reform to try and return to original
intention of constitution ... and special interests making significant
money off current status quo.

the stories about the leaked tpp draft were that it included provisions
that had been repeatedly defeated in recent legislation AND go way
beyond the original provisions in the constitution

apparently trolls tend to hit small businesses the hardest ... since
it is frequently cheaper to settle. large firms (with large number of
patents) seem to oppose anti-troll provisions since it may make it
easier to challenge their patents

from above:
CBM expansion opponents include IBM, Microsoft, General Electric,
Adobe and many other firms. They have argued in a letter to lawmakers
that an expansion would discourage investment, and give "infringers a
new procedural loophole to delay enforcement." Companies with large
patent holdings also appear more likely to oppose the CBM expansion.

and there is this: Wall Street's Bad Old Days Could Be Back If the
Banks Win this Lawsuit
http://blog.foreignpolicy.com/posts/2013/12/04/wall_streets_bad_old_days_could_be_back_if_the_banks_win_this_lawsuit

Shmuel (Seymour J.) Metz <spamtrap@library.lspace.org.invalid> writes:
BTW, by the 1990's I was extremely reluctant to look at a dump on
paper. If I'm shooting a problem, I want to be able to use, e.g.,
IPCS, instead of a mares nests of marginal notes and paper clips.

very early in rex(x) life ... well before it had been released to
customers, I wanted to demonstrate that rexx wasn't just another pretty
scripting language. I chose to re-implement (vm370) IPCS (a large
application implemented in 370 assembler) ... the objective was to take
less than half-time over 3 months, with ten times the function and ten
times the performance (neat trick going from assembler to interpreted
rexx). well under the 3 month period ... it was done ... and I started
on a growing library of automated scripts that would examine a dump for
anomalies and failure signatures.
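not the rexx code itself (and obviously not in rexx) ... just a toy
python illustration of the "library of automated scripts" idea, with
made-up failure signatures:

```python
# each "script" is a small check that scans a dump image for one known
# anomaly/failure signature; the library just runs them all and
# collects what was found.  the signatures below are invented purely
# for illustration.

def check_storage_overlay(dump: bytes):
    if b"DEADBEEF" in dump:                 # hypothetical overlay eyecatcher
        return "storage overlay signature found"

def check_wait_state(dump: bytes):
    if dump.startswith(b"WAIT"):            # hypothetical disabled-wait marker
        return "disabled wait state"

CHECKS = [check_storage_overlay, check_wait_state]

def analyze(dump: bytes) -> list:
    """run every check against the dump; collect the anomalies found."""
    return [msg for chk in CHECKS if (msg := chk(dump)) is not None]
```

the value of the approach is that each new failure signature, once
diagnosed by hand, becomes one more small check run automatically
against every future dump.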

later I thought it would be released to customers (after rexx became
available) in place of the existing IPCS. for whatever reason that never
happened even though it was in use by nearly every internal customer
support PSR and every internal datacenter. Eventually I did get approval
to do presentations on the implementation at BAYBUNCH (monthly silicon
valley user group meeting) and SHARE. Within a couple of months after
the presentations, similar implementations from other sources were
becoming available.

I had a big issue with VNET throughput ... which relied on the vm370
full page (4k) spool diagnose interface. the problem was that it was
synchronous and could contend with lots of other concurrent requests
... possibly
getting only 4-8 4k blocks/sec (16kbyte/sec to 32kbyte/sec). For HSDT I
needed upwards of 3mbyte/sec (1.5mbit/sec T1 full-duplex link, 3mbit/sec
aggregate or about 300kbyte/sec ... ten such links, 3mbyte/sec).

HSDTSFS was the vm370 spool file function recoded in pascal/vs and moved
to virtual address space. It included support for asynchronous
operation, contiguous allocation, multi-page transfers, read-ahead and
write-behind.
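the gap between the two is back-of-the-envelope arithmetic (python,
using the post's own round numbers):

```python
# synchronous 4k spool diagnose interface: 4-8 blocks/sec under contention
BLOCK = 4096
spool_best = 8 * BLOCK            # ~32kbytes/sec ceiling

# HSDT: ten full-duplex T1 links at ~300kbytes/sec usable each
T1_BITS = 1_500_000               # bits/sec, each direction
per_link_raw = 2 * (T1_BITS // 8) # 375_000 bytes/sec raw, full duplex
target = 10 * 300_000             # ~3mbytes/sec aggregate needed

shortfall = target // spool_best  # roughly two orders of magnitude short
```

hence HSDTSFS: asynchronous operation, contiguous allocation, and
multi-page transfers attack exactly the per-request synchronous
bottleneck in these numbers.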

Lon <lon.stowell@comcast.net> writes:
I remember a story at Amdahl that they were not the first choice of
platform for a very wide and all-encompassing Unix license from
AT&T--IBM was but that was during the era when IBM was a tad too in
love with its own ideas and architectures to understand the concepts
behind Unix.

So Amdahl ended up with a very large cookie jar that they never truly
appreciated In My Non Humble Opinion nor took true advantage of.
Don't recall who came up with the first shipping 64 bit SVR4
Unix--Pyramid and SGI were the first I recall since they shared
technology.

some of the local Amdahl people would try and suck me into their
politics. One of the guys that had done HASP ... then had the RASP
effort ... an MFT-based virtual memory system. when that got canned
... he left to join Amdahl and recreated it from scratch (there was some
IBM litigation ... but the court-ordered code review only found a very
few lines of similar code). reference to RASP ... reference
http://www.garlic.com/~lynn/2011d.html#73 Multiple Virtual Memory

That project was in internal political competition with GOLD ... I
suggested why didn't they try the SSUP (stripped down tss/370) approach
and meld the two ... but that never happened (I knew people in
both groups)

IBM had an internal effort to do BSD port to 370 ... but before it
shipped, it was redirected to the PC/RT ... and came out as AOS. The
"official" unix for PC/RT was AIX ... which was an AT&T port that had
been done by the company that had done PC/IX for the IBM PC.

In the same bldg. with the people that started doing the BSD port (first
370 and then redirected to pc/rt) ... there was an IBM group working
with UCLA on its unix work-alike ... LOCUS. That eventually shipped on
370 & 386 as aix/370 and aix/386.

A major issue was that field engineering wouldn't support a mainframe
machine that didn't have RAS and EREP. Adding that level of mainframe
RAS & EREP to UNIX was a project several times larger than the
straightforward unix port. This is one of the reasons for the TSS/370
SSUP ... since they got all the device error recovery, RAS, & EREP
... with UNIX facilities layered on top. It was also why aix/370 mostly
ran in a vm370 virtual machine (relying on vm370 for the device error
recovery, RAS & EREP).

in the early 80s, as "punishment" for various transgressions (like
being blamed for online computer conferencing on the internal network in
the late 70s and early 80s) I was transferred to YKT research and direct
report to an executive (that possibly didn't have any other direct
reports at the time). I was allowed to continue living in San Jose but
had to commute to the east coast a couple times a month.

... we reported directly to executive ... who then moved over to head
up somerset (joint ibm, motorola, apple morphing 801/risc into
power/pc). After SGI buys MIPS, they hire him away to run MIPS. By this
time we had also left IBM.

--
virtualization experience starting Jan1968, online at home since Mar1970

'Free Unix!': The world-changing proclamation made 30 years ago today

hancock4 writes:
True. But I think in the early days of the PC (early 1980s), a PC
cost more than a 3270-clone. In those days with hardware still so
expensive, people got only the hardware their specific job required.
The early PCs were equipped only with the software and hardware a
specific person needed--not everyone got a modem, 3270-card, various
application software, etc.

an IBM/PC that cost the same as a 3270 terminal that had already been
cost-justified made switching a brain-dead-simple business case. if the
base business case was a less expensive 3270-clone, then the incremental
business case was still a lot easier than justifying a business case
from scratch (as well as significantly easier compared to justifying
both a 3270-clone *AND* some sort of PC).

a separate issue was the convenience of a single desktop footprint
... rather than two-or-more screens, keyboards, etc.

somebody could make a case to "upgrade" their 3270-clone to ibm/pc
... and then use the 3270-clone somewhere else in the company
(justifying the difference between the cost of the 3270-clone and the
ibm/pc versus justifying the full cost of the ibm/pc).

z/OS is antique WAS: Aging Sysprogs = Aging Farmers

PaulGBoulder@AIM.COM (Paul Gilmartin) writes:
And an alien once asked me, "VM is a version of MVS, isn't it?"

cms had about 64kbytes of code that was the "os" simulator that allowed
"os" compilers and many applications to run unmodified.

the burlington mall vm370 development group was working on a much more
complete coverage of os simulation ... the joke was that cms's 64kbyte
os/360 simulation was much more cost effective than mvs's 8mbyte os/360
simulation.

head of POK also managed to convince corporate to kill the vm370
product, shutdown burlington mall group, and transfer all the burlington
mall developers to POK or otherwise MVS/XA wouldn't ship on
time. Endicott eventually managed to save the vm370 product mission but
had to reconstitute a development group from scratch.

the shutdown of burlington was going on in extreme secret, not planning
on telling the people until a few weeks before it was effective
... minimizing the number of people that would be able to escape the
move to POK. however, the shutdown managed to leak a few months early
... and numerous people managed to escape ... so many going to work at
DEC on VMS (very early in its development, well before first VMS release
shipped) ... that somebody observed that the head of POK was one of the
biggest contributors to VMS
https://en.wikipedia.org/wiki/VAX

The major expansion of os/360 simulation for cms disappeared in the
shutdown of the burlington mall group ... and the major person
responsible was one of those that went to DEC.

vax/vms sold into much the same mid-range market against vm/4300 ... and
in similar numbers ... for small order sizes (one or few machines). A
big difference was large corporations ordering several hundred vm/4300s
at a time for deployment out in departmental areas. A past post
mentioning explosion in vm/4300 departmental machines
http://www.garlic.com/~lynn/2001m.html#15 departmental servers

'Free Unix!': The world-changing proclamation made 30 years ago today

Andrew Swallow <am.swallow@btinternet.com> writes:
8080 and Z80 programs had to be reassembled to work on the X86
family. IBM could have written an emulator for the 8080 that ran on
the 68000. Writing a word processor and spreadsheet were well within
IBM's software ability.

circa 1980 ... there was effort to replace large number of different
internal microprocessors with 801/risc ... including the various
microprocessors used in low-end and mid-range 370s, various of the
controller microprocessors, etc ("Iliad" chips ... which had additions
to help with emulation operations). There was also ROMP (801/risc
Research/Office Products Division MicroProcessor) that was going to be
used for the displaywriter followon.

for various reasons the Iliad-based efforts were aborted (including one
for as/400 that was going to replace s/38 ... and they quickly did a
CISC microprocessor in its place). The followon to the displaywriter was
canceled (presumably in part because of the rise of PCs) and the group
looked around for something else to use it for and settled on the unix
workstation market. The company that had done the AT&T unix port PC/IX
for the ibm/pc was hired to do one for ROMP (which became the PC/RT and
AIX).

the displaywriter followon 801/risc ROMP had no protection domains and
was designed to run CP.r written in PL.8. They had a couple hundred PL.8
programmers and, possibly to give them something to do, they defined the
project with a hypervisor written in PL.8, with AIX actually running in
a pseudo virtual machine (not the native hardware). The
project was done based on claim that the port of AT&T unix to the native
hardware would take more effort than the combined hypervisor plus unix
port effort. This was subsequently disproved when the group did the BSD
port to the native hardware (w/o hypervisor) for AOS on PC/RT ... with
much less resources than either the hypervisor or the AIX port.

801/risc Iliad was going to be used for the 4331/4341 followon ... the
4361 & 4381. I contributed to the white paper showing chip technology
had progressed to the point where nearly a full 370 could be implemented
directly in (CISC) circuits natively w/o need for emulation.

with the implosion of the Iliad efforts, several 801/risc engineers
leave and show up in risc efforts at other vendors.

note rochester as/400 group was involved in the power/pc effort ... and
a decade after they abandoned Iliad for a cisc chip ... they switch
as/400 to 64bit power/pc (actually 65bit ... special tag bit).

--
virtualization experience starting Jan1968, online at home since Mar1970

What Makes a Tax System Bizarre?

Alan Bowler <atbowler@thinkage.ca> writes:
I.e. for years they had been "borrowing" from the pension funds (by
underpaying what was needed), and reporting the money as profit
instead of a loan. Then paying excessive executive bonuses for this
great financial performance.

greymausg <maus@mail.com> writes:
I am told that now, if you are really `key' personnel,
working in an US company here, you can arrange to have
your salary paid anywhere. However, I heard recently of
a man who left Ireland about 15 years ago, lots of
expertise, got `headhunted' by facebook, and started to arrange
to move back home, but there was `just one more interview',
(actually 24!)

there are efforts to shutdown many of these provisions ... including
tax-havens ... but there are lots of opposing forces ... these guys
have been doing a lot of reporting on the illegal & legal offshore
tax scams.

recent articles say that the current US corporate tax rate is a facade
... that the effective tax rate is near zero with all the
deductions. Proposals to rationalize US corporate tax with a much lower
tax rate and elimination of all the loopholes is met with heavy
opposition; by congress because they are paid enormous sums of money
to put in loopholes and by corporations because a combination of a
much lower tax rate with the loophole eliminations would result in a
higher effective corporate tax rate.

we've done some stuff in washington for free. we consulted for free for
the census department on the new computer hardware for the 2000 census
... including being asked to spend all day at the front of the room
answering questions in an audit done by another agency.

however, we've been told that offering to work on something for free is
one of the most threatening things that you can say in washington ... it
is enough to mark you for life by the beltway bandits.