I've mentioned several times that in the early/mid 90s there were
several industry presentations by the consumer dial-up online banking
operations about their move to the internet ... motivated by the
high cost of supporting serial-port modems and proprietary networking
(which basically would be offloaded to ISPs). At the same time, the
commercial dial-up online banking/cash-management operations were
saying that they would never move to the internet because of a long
list of security issues

Also, Amazon is advertising that they are provisioning e5-2600 blades
as standard cloud "EC2". It would be interesting to see Amazon &
Google price/performance evaluations for various E5 & E7
configurations.

about 2005, Z9 BIPS and "86" BIPS were comparable, but the "86" vendors
were moving to RISC-based cores (translating 86 instructions to RISC
micro-ops for execution, nullifying much of the RISC/86 differences)
... as well as adding more "cores" per chip. There is a misconception,
possibly associated with $500 "86" consumer machines, that they are
representative of the I/O capability and throughput of server-class machines.

so if a two-chip E5 (e5-2600) configuration benchmarks at over 500BIPS
... what would a four-chip E5 (e5-4600) configuration really benchmark
at, and what would an eight-chip E7 configuration benchmark at???
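
(pure speculation on my part ... a back-of-envelope sketch using only the
500BIPS two-chip figure above plus an assumed, hypothetical per-doubling
scaling efficiency ... not vendor benchmark data)

    import math

    two_chip_bips = 500.0                   # figure quoted above for e5-2600

    def estimate(chips, efficiency=0.85):   # 0.85/doubling is purely an assumption
        # assume each doubling of sockets delivers only `efficiency` of linear scaling
        doublings = math.log2(chips / 2)
        return two_chip_bips * (chips / 2) * (efficiency ** doublings)

    for n in (4, 8):
        print(n, "chips: ~", round(estimate(n)), "BIPS")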

The reason that we were working with Oracle was that their "open
system" platform was shared with vax/cluster ... so cluster support was
part of the same native platform code that also ran on unix ... once
unix provided cluster support, which is what we had done in HA/CMP.
http://www.garlic.com/~lynn/subtopic.html#hacmp

My wife had previously been con'ed into going to POK to be in charge
of loosely-coupled architecture .... and while there she had created
the peer-coupled shared data architecture. however because of a
combination of low uptake (until sysplex, only ims hot-standby) and
periodic battles with the communication group trying to force her into
using SNA/VTAM for loosely-coupled (aka mainframe for "cluster")
operation ... she didn't stay long.
http://www.garlic.com/~lynn/submain.html#shareddata

I had also worked on various loosely-coupled scaleup efforts and was
familiar with the mainframe support necessary for RDBMS.

The vendors that had done RDBMS on vax/cluster also had a list of ten
things that vax/cluster had done wrong. In any case, I was able to do
something that emulated the semantics of the vax/cluster API (easing
"open" RDBMS port to unix cluster) that was significantly more
efficient. At the time, the IBM non-mainframe RDBMS was code that was
still under development for OS2 ... it wasn't until much later that it
provided unix support and eventually got around to cluster support. As
previously mentioned ... the mainframe DB2 people complained that if I
was allowed to continue ... it would be a minimum of five years ahead
of where they were. This contributed to the decision to transfer the
effort (announcing it a couple weeks later as the corporate
supercomputer for scientific & numeric intensive only) and to telling
us we couldn't work on anything with more than four processors.
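
(for a flavor of the semantics involved ... a minimal sketch of a
vax/cluster-style distributed lock manager compatibility check; the six
lock modes are the standard VMS/DLM ones, the code itself is just an
illustration, not what was actually built)

    # six lock modes: null, concurrent read, concurrent write,
    # protected read, protected write, exclusive
    # COMPAT[held][requested] -> can the request be granted?
    COMPAT = {
        "NL": {"NL": 1, "CR": 1, "CW": 1, "PR": 1, "PW": 1, "EX": 1},
        "CR": {"NL": 1, "CR": 1, "CW": 1, "PR": 1, "PW": 1, "EX": 0},
        "CW": {"NL": 1, "CR": 1, "CW": 1, "PR": 0, "PW": 0, "EX": 0},
        "PR": {"NL": 1, "CR": 1, "CW": 0, "PR": 1, "PW": 0, "EX": 0},
        "PW": {"NL": 1, "CR": 1, "CW": 0, "PR": 0, "PW": 0, "EX": 0},
        "EX": {"NL": 1, "CR": 0, "CW": 0, "PR": 0, "PW": 0, "EX": 0},
    }

    def can_grant(requested, holders):
        # grantable only if compatible with every lock already held on the resource
        return all(COMPAT[held][requested] for held in holders)

    print(can_grant("PR", ["CR"]))   # True  -- readers can share
    print(can_grant("EX", ["CR"]))   # False -- must wait for the reader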

I had been working with LLNL since before 1988 on cluster scaleup also
being applicable to what they were doing. They had a
distributed/network filesystem on their Cray machine that we ported to
HA/CMP and was working on HA/CMP cluster scaleup ... not only for
128-way parallel oracle, but also 128-way filemanagement as well as
128-way numerical intensive. some also mentioned in this series of
email from 1991 & Jan 1992
http://www.garlic.com/~lynn/lhwemail.html#medusa

possibly within hrs of the last email referenced (discussion of a
meeting at LLNL, end of Jan1992), the effort was transferred and we were
told we couldn't work on anything with more than four processors. A
couple weeks later it was announced as a supercomputer with 128-way
supposed to be available YE1992 for scientific & numerical
intensive ONLY ... press reference from 17Feb1992
http://www.garlic.com/~lynn/2001n.html#6000clusters1
and another press reference from 11May1992
http://www.garlic.com/~lynn/2001n.html#6000clusters2

... again ... "From the Annals of Release No Software Before Its Time"
(20 yrs later)

as mentioned in the Greater IBM referenced post ... we decided to
leave after being told we couldn't work on anything with more than
four processors ... and did some consulting for the person at LLNL
that was heading up commercializing LLNL technology.

And further topic drift regarding the 4341 footnote ... because of
breaching a price/performance threshold as well as being able to be
deployed outside the traditional datacenter environment (small footprint,
reduced environmental requirements, etc) ... there was an enormous
explosion in sales of 4341s (and 4331s).

There were several big problems with MVS being able to participate in
this market ... the enormous people/support requirements for MVS
exceeded the resources and support costs that this new 4341 mid-range
market (including the leading edge of the distributed computing
tsunami) was willing to dedicate.

The other issue was that the disk available to be sold into this market
was the 3370 FBA disk. MVS only supported CKD (and still only supports
CKD), with the only new CKD disk being the 3380 (the 3370 FBA disk being
for the mid-range market). This didn't preclude customers replacing
existing machines and continuing to use older generation CKD disks
... but MVS didn't have anything for the rapidly expanding new market.

Eventually there was pressure to come out with the 3375 ... which was
CKD simulation built on a real 3370 FBA ... trying to address some of
the market barriers for MVS in the mid-range market. This continues to
be a major issue for MVS (and its descendants) ... since there haven't
been any real CKD disks manufactured for decades. past posts mentioning
CKD, FBA, multi-track searches, etc
http://www.garlic.com/~lynn/submain.html#dasd

Joe Pfeiffer <pfeiffer@cs.nmsu.edu> writes:
US Pacific Northwest is another that comes to mind.

grand coulee dam backed up the columbia right to the canadian border. when
it was decided to add the 3rd powerhouse ... they also needed to raise
the level of the lake/river behind the dam ... backing the level up into
canada ... requiring a treaty. they also put in reversible pumps in the
irrigation part that pumps from the columbia into "grand coulee" (aka
"banks lake"). they can use off-peak generating capacity to pump excess
water into banks lake and then reverse the process for power
generation during peak load periods.

"J. Clarke" <jclarkeusenet@cox.net> writes:
I've long held that any computing job that can cost lives or vast
amounts of money if it goes wrong should be run in parallel on three
machines with different hardware and different programs all designed to
the same spec. If one disagrees with the others you know you have a
problem but meanwhile you keep operating.
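
(a toy sketch of the 2-out-of-3 voting idea ... just an illustration, not
from any actual flight or transaction system)

    def vote(results):
        # results: answers from three independently developed implementations
        for candidate in results:
            if results.count(candidate) >= 2:              # majority agreement
                dissent = [i for i, r in enumerate(results) if r != candidate]
                return candidate, dissent                  # keep operating, flag dissenter
        raise RuntimeError("no two implementations agree -- stop and investigate")

    # three stand-in "independent" implementations of the same spec (square x)
    impls = [lambda x: x * x, lambda x: x ** 2, lambda x: sum(x for _ in range(x))]
    answer, dissent = vote([f(7) for f in impls])
    print(answer, dissent)    # 49 [] -- all three agree, no problem flagged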

from above:
Benoit Mandelbrot, in his 2004 The Misbehavior of Markets, had pointed
them out with mathematical elegance we could not hope to match
(Mandelbrot had pointed out flaws in the emerging underlying theory as
early as 1962).

Mandelbrot's description of the period from the 60s through the last
decade was that they continued using the same computations even when
those were repeatedly shown to be wrong.

some of Mandelbrot's references are similar to this (by a nobel
prize winner in economics)

Thinking Fast and Slow
http://www.amazon.com/Thinking-Fast-and-Slow-ebook/dp/B00555X8OA
Since then, my questions about the stock market have hardened into a
larger puzzle: a major industry appears to be built largely on an
illusion of skill. Billions of shares are traded every day, with many
people buying each stock and others selling it to them

... snip ...

which appears to strongly support the view that the enormous amounts of
wealth being accumulated are either blind luck and/or enormous amounts of fraud

from above:
Ever feel like the financial markets are simply a rigged game where
the house (i.e. the world's largest banks) always win? Reading text
messages and emails between traders at Barclays (BCS) about their
often successful attempts to manipulate global benchmarks for interest
rates will only reinforce that belief.

... snip ...

--
virtualization experience starting Jan1968, online at home since Mar1970

Andy Leighton <andyl@azaal.plus.com> writes:
Yep but the banks aren't using modern machines for their retail
services. I guess some of the systems have roots in to the 90s, maybe
even the 80s. Some of the same problems exist in other industry
sectors. For example a travel company I used to work for announced
the replacement of their reservations system. The system they have
dates back to the late 80s I believe, and was out-dated 15 years
ago.

Amadeus was adapted from the Eastern "System One" reservation system. my wife did a
short stint as chief architect for Amadeus ... but she came down on the
side of using x.25 and the SNA forces had her replaced. It didn't do
much good since they went with x.25 anyway.

The Houston science center had done an analysis showing that a future
system machine built from the fastest available hardware (i.e. that used
in the 370/195) running "System One" ... would have the throughput of a
370/145 (between a 10:1 and 30:1 slowdown) ... an analysis putting the
final nails in the "Future System" coffin.

--
virtualization experience starting Jan1968, online at home since Mar1970

one of the issues was that complete source for cp67 (and later vm370) was
shipped ... a customer could rebuild the exact system from source. there is
folklore from the early 80s that the agency asked for exact source that
corresponded to everything in any particular running MVS system release.
the corporation formed a task force that eventually spent $5M
investigating the issue before concluding that it wasn't practical
(exact source was never shipped and there were so many different build
processes for all the different components that it would be extremely
difficult to identify the different sources that corresponded to all the
different components that were brought together for any particular
release distribution).

--
virtualization experience starting Jan1968, online at home since Mar1970

Peter Flass <Peter_Flass@Yahoo.com> writes:
I believe CP/67 had the opposite problem. You got isolation and
security, but it was hard to share things. IBM has been putting
facilities for sharing in ever since XA. (GCS, IUCV, Shared
Filesystem, etc.)

IUCV went in with vm370 in the later part of the 70s (before xa). SPM
had been done internally on cp67 by the Pisa Science Center and then
migrated to VM370. For whatever reason, the development group ignored it
for the product ... releasing a series of SPM subsets (VMCF, IUCV, etc)

cp67&vm370 were supposedly micro-kernels with absolute minimum function
for providing virtual machines ... with all other functions done in what
was called service virtual machines (now sometimes called virtual
appliances) ... that would contain more complex strategies for things
like sharing, permissions, etc. There were numerous issues over the
decades when people were brought over who had experience with bloated
operating systems ... and assumed that the same approach should be used
... implementing everything in the base kernel rather than moving it into
a separate virtual address space (a strategy that would periodically
accumulate enormous complexity in the base kernel).

with the majority of people trained in the bloated kernel paradigm ... it
was a constant battle to have things done in virtual address spaces
... rather than the base kernel.

when they finally allowed the original internal network support to ship
to customers (ran in service virtual machine) ... it shipped with
"SPM" support in the source ... even tho vm370 didn't ship "SPM".

the author of REXX wrote a multi-user spacewar game that operated based
on SPM ... a service virtual machine running the game with all the
clients in individual user virtual machines ... communicating via "SPM"
... with the internal network support providing "SPM" forwarding
services so that its clients could be either on the same real machine or
on different real machines in the network
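
(very roughly, the structure ... one service virtual machine owning the
game state, each player's virtual machine only exchanging messages with
it; a minimal sketch of that pattern with ordinary queues standing in for
"SPM", since the actual SPM interface never shipped)

    import queue, threading

    to_server = queue.Queue()                        # all client->server messages
    to_client = {1: queue.Queue(), 2: queue.Queue()} # per-player reply queues

    def game_server():
        positions = {1: 0, 2: 0}                     # the only copy of the game state
        while True:
            player, move = to_server.get()
            if move is None:
                break                                # shutdown message
            positions[player] += move
            for q in to_client.values():             # broadcast updated state
                q.put(dict(positions))

    threading.Thread(target=game_server, daemon=True).start()
    to_server.put((1, +5))                           # player 1 sends a move
    print(to_client[1].get())                        # player 1 sees the new state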

a fully configured z196 with 80 processors (aka "cores") ... with each
processor operating at less than a BIP ... costs $28M. IBM reporting
has it doing approx. $5B in mainframe hardware/year, or the equivalent
of approx. 180 80-processor z196s. Ignoring that the cloud datacenters are
getting at least 10 times the processing power per core ... a single
million core/processor mega-datacenter z196 complex would translate
into 12,500 80-processor z196s ... or 70 yrs of mainframe sales at the
current rate. It would be a minimum of ten times that if looking at
equivalent processing power ... or 700 yrs of mainframe sales (at the
current rate) for a single mega-datacenter.
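
the arithmetic spelled out (using just the numbers from the above
paragraph):

    z196_price   = 28e6          # fully configured 80-processor z196
    annual_sales = 5e9           # approx. mainframe hardware sales per year
    per_year     = annual_sales / z196_price
    print(round(per_year))                       # ~180 z196s/year

    z196_equiv   = 1e6 / 80                      # million cores / 80 cores each
    print(round(z196_equiv))                     # 12,500 80-processor z196s
    print(round(z196_equiv / per_year))          # ~70 yrs of sales
    print(round(10 * z196_equiv / per_year))     # ~700 yrs at 10x power/core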

"Charlie Gibbs" <cgibbs@kltpzyxm.invalid> writes:
It must be human nature. Richard Feynman's investigation of the
Challenger disaster revealed the same sort of fudging. As he said
in his final report: "For a successful technology, reality must
take precedence over public relations, for Nature cannot be fooled."
(The photo on the back cover of "What Do _You_ Care What Other
People Think?" is priceless - it shows Feynman sitting in one of
the meetings, and you can tell that his bullshit detectors are
turned up full.)

Analog (the sf magazine) ran a parody in which somebody in the court convinced the
queen that columbus' ships needed to be built in the mountains where the
trees were, cut into three pieces for transport to the harbor and then
fitted back together.

the take-off was on an influential congressman having gotten the booster
rockets built in his home state ... but they had to be built in sections
for transport to canaveral for final assembly. this was as opposed to a
competing proposal for building in a location from which they could be
transported in a single section (but which didn't have the equivalent
congressional influence).

that aspect was apparently out-of-bounds in the evaluation ... but it is
possibly more similar than it shows at first glance ... congressional
meddling was a significant factor in both ... some discussed in
this reference post
http://www.garlic.com/~lynn/2012i.html#94 Naked emperors, holy cows and Libor

there have been numerous references to Eisenhower having originally intended to
warn about the military-industrial-congressional complex (MICC) but dropping
"congressional" at the last minute. My analogy is the
financial-regulatory-congressional complex (FRCC) where the regulatory
agencies were being stripped of power by congress and under heavy
pressure from congress (and others) to not do anything about the few
remaining regulations. I've drawn the analogy about regulatory agencies
being forced into three-monkeys role (hear no evil, see no evil, speak
no evil).
https://en.wikipedia.org/wiki/Three_wise_monkeys

during johnson's presidency, his wife was on a kick about beautification
... eliminating billboards and other efforts. one was to eliminate the
overhead transmission lines at grand coulee dam from the powerhouses
(generators) to the switchyard (on the high area above the dam). The issue
was that gravity would slump the wires in any sort of enclosed conduit,
eliminating separation, causing arcing and fire. Engineers that
objected were overruled. Then when it happened, the engineers that had
objected were blamed.

--
virtualization experience starting Jan1968, online at home since Mar1970

Andy Champ <no.way@nospam.invalid> writes:
Just off the top of my head - Three Gorges Project. Aswan High
Dam. Grand Coulee Dam. Itaipu Dam. That's Eurasia, Africa, North
America and South America, and I suspect all are bigger than anything
in Canada or Scandinavia (though I could be wrong on that)

there was some amount of early complaint about the need for the Grand
Coulee dam ... sort of justified as a work/employment project
in the great depression ... but that was sort of silenced once
ww2 started and the electricity was used for all the aluminum that
boeing needed for the massive number of planes it turned out ... reference
http://www.rbogash.com/Plant%202/2Plant2.html
at this website:
http://www.rbogash.com/

there was an article yesterday about three gorges coming on at full capacity
... 22.5GW ... just short of the aggregate for all the dams on the
columbia river watershed
http://www.chinadaily.com.cn/china/2012-07/05/content_15550486.htm

in the book he spends some amount of time on fitting patterns
of nile flooding over the last couple millennia, and on the person
doing the study ... pg180:
His own calculations showed the Nile could be tamed by a series of
moderate-sized, interdependent reservoirs far upriver from Egypt. But by
the time construction was commissioned in the 1950s, the newly
independent government of Gamal Abdel Nasser preferred a grander
political statement of Egyptian pride, the Aswan High Dam. Still,
Hurst's calculations were needed even for that

... snip ...

My wife remembers going by boat up the three gorges when she was a little
girl. Her dad had been in europe during ww2 (command of the 1154th
engineering combat group) ... and afterwards was sent to China to be an
advisor ("magic") to the generalissimo ... and he took his family with him
to nanking (later they were evacuated out of nanking in an army cargo
plane on three hrs notice when the city was ringed, arriving at the
tsing tao airfield after dark, cars&trucks headlights were lined up to
light the field).

I use FBA (512) as a generic term for all the physical fixed-block disks now being manufactured.

That is independent of the issue of remapping channel programs to scsi,
ide, sata, etc ... essentially all are the same physical
fixed-block formatted spinning platters with different electronics for
the command interface

from above:
So-called legacy formatting schemes sandwich each 512-byte sector
between Sync/DAM and ECC blocks that handle data address marking and
error correction, respectively -- and also take up space. You
still need those blocks with Advanced Format, but only every 4KB
rather than every 512 bytes, which translates to a dramatic reduction
in overhead. This approach allows Advanced Format to make more
efficient use of a platter's available capacity, and Western Digital
expects it to boost useful storage by 7-11%, depending on the
implementation. Current 500GB/platter products stand to see an
increase in useful capacity of about 10%, which is really quite
impressive.

... snip ...

then electronics are layered on top for the specific kind of interface.
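
a back-of-envelope on the quoted 7-11% number (the per-sector overhead
below is purely an assumed figure for illustration, not Western Digital's
actual layout):

    overhead = 50.0    # assumed bytes of sync/DAM + ECC + gap per physical sector

    def efficiency(sector_bytes):
        return sector_bytes / (sector_bytes + overhead)

    legacy   = efficiency(512)    # eight small sectors per 4KB of data
    advanced = efficiency(4096)   # one large sector per 4KB of data
    print(round(100 * (advanced / legacy - 1), 1), "% more usable capacity")
    # ~8% with this assumed overhead -- in the ballpark of the quoted 7-11%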

Even in the days of the mainframe 3310 & 3370 ... the mainframe disk
controller would remap the channel program interface into the command
interface supported by the physical disks. However CKD emulation on
FBA isn't just a matter of the mainframe disk controller taking channel
program commands and mapping them to the physical disk interface ... the
whole physical disk format paradigm in CKD is totally different from
the physical disk format paradigm in FBA.

I would then claim that what a channel-attached 3830 disk controller
needed to do to convert a channel program into 3310/FBA commands versus
what it needed to do to convert a channel program into 3370/FBA commands
... would be equivalent to converting a channel program into
scsi/FBA commands.
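
for a flavor of the simulation side, a toy sketch of mapping a CKD-style
cylinder/head/record address onto fixed blocks under an assumed fixed
geometry (the numbers are illustrative, not actual 3390 internals; real
CKD emulation also has to handle variable record lengths, keys,
multi-track search, etc.):

    TRACKS_PER_CYL   = 15    # 3390-style tracks per cylinder
    BLOCKS_PER_TRACK = 112   # assumed 512-byte blocks backing one emulated track

    def ckd_to_fba(cyl, head, rec, blocks_per_rec=1):
        # flatten cylinder/head to a track number, then to a starting block;
        # real emulation must also locate variable-length records within the track
        track = cyl * TRACKS_PER_CYL + head
        return track * BLOCKS_PER_TRACK + (rec - 1) * blocks_per_rec

    print(ckd_to_fba(cyl=1, head=3, rec=2))   # starting FBA block for that record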

The distinction back then ... is that you didn't have a large number
of 3370/FBA physical disks connected to non-mainframe channel attached
disk controllers ... so it might be assumed that the disk controller
channel program interface was synonymous with the interface that the
disk controller used to talk to the physical disk (and that the 3310
interface might be identical to the 3370 interface).

I've periodically mentioned that the MVS disk-support/data management
people told me that even if I gave them fully integrated & tested
MVS FBA support ... I still needed to show a couple hundred million in
incremental disk sales to justify the $26M cost to cover documentation,
training and education. The subtext was that customers were ordering
3380s as fast as they could be produced ... so MVS FBA support would
just convert CKD sales to FBA sales w/o incremental new sales ... and I wasn't
allowed to use the significant life-cycle cost savings in the
business justification.

The original extensions for ECKD were done for the 3880 speed matching
buffer (calypso) allowing 3mbyte/sec 3380 disks to work with older
machines limited to 1.5mbyte/sec channels (158s, 168s, 3031s, 3032s,
3033s) ... which represented an enormous debugging cost. A FBA 3380
wouldn't have had the enormous ECKD speed-matching bugs that calypso
had trying to do CKD 3380.

oh, and they periodically asked me to come over and play disk engineer in
bldg14 (disk engineering lab) and bldg15 (disk product test lab)
... last time I checked satellite photos, a couple of the bldgs were still
standing on the san jose plant site. misc. past posts about playing
disk engineer in bldgs 14&15
http://www.garlic.com/~lynn/subtopic.html#disk

--
virtualization experience starting Jan1968, online at home since Mar1970

In the early 70s, Amdahl held a seminar in a large auditorium at MIT about
forming his new company. Somebody in the audience asked him what
argument was used to convince investors to back his clone processor
company. His reply was that customers had already invested hundreds of
billions in 360/370 software and even if IBM were to totally walk away
from 370, that software investment would keep him in business through
the end of the century. The "walk away from 370" could be considered a
veiled reference to IBM's future system effort which was going to
completely replace 370 with something completely different (internally
during the FS period, 370 projects were being killed off). misc. past
posts mentioning IBM's FS effort during first half of 70s decade
http://www.garlic.com/~lynn/submain.html#futuresys

when I first wandered into bldg14, they were doing stand-alone,
dedicated, 7x24 scheduled "testcell" testing. At one point they had
tried installing MVS for concurrent testing ... but found in that
environment MVS had 15min MTBF (hang/crash requiring re-ipl). I
offered to rewrite i/o supervisor to provide bullet-proof, never-fail
operation ... enabling on-demand, concurrent, anytime testing
... greatly improving disk development productivity. I then wrote an
internal-only document describing what needed to be done, happened to
mention MVS's 15min MTBF, and brought down the wrath of the MVS
group on my head. I was told (unofficially) that I would never get any
sort of promotion/award requiring corporate level concurrence since
they would always oppose it. Later in the mid-80s, the love affair
that the MVS group has with CKD extended to forcing the VM/XA group to put
out a position statement saying CKD was better than FBA ... petty
politics trumping technical integrity

The IBM FBA interface from the 70s returned the block size and a fullword number
of blocks (FBA "Read device characteristics"). 1970 software drivers
would continue to work unchanged with 512-byte size blocks up through
2terabyte disks. Correctly written 1970 drivers, taking into account
the returned block size, would work with 4096-byte size blocks up to
16terabyte disks. This avoided the constant CKD software driver
changes that occurred until they decided (since it was pure
simulation anyway) to just settle on standard 3390 geometry (totally
ending the fiction that there was any remotely real relationship
between CKD geometry and physical geometry).
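
the arithmetic behind those limits (block count returned as a 32-bit
fullword):

    blocks = 2 ** 32                       # fullword (32-bit) count of blocks
    print(blocks * 512  / 2 ** 40)         # 2.0  -- terabytes with 512-byte blocks
    print(blocks * 4096 / 2 ** 40)         # 16.0 -- terabytes with 4096-byte blocks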

from above:
For many applications, FBA not only offered simplicity, but an
increase in throughput. GOAL Systems of Columbus, Ohio, discovered
that an FBA emulator written for VM by Bill Jurist delivered an
unexpected boost of speed.

... snip ...

some documentation makes a distinction between the last FBA disk that
IBM actually made and simulated FBA using SCSI. However, that is a relatively
trivial distinction since IBM disk controllers always had to translate
between the channel program interface from the channel and what the disk
interface expected to see. The primary issue was whether there was a
one-to-one mapping or whether there was an enormous amount of
simulation required, as between the CKD paradigm and the FBA paradigm.

this upthread reference to an early Jan92 meeting regarding 128-way
cluster deployments for DBMS and commercial environments (which got
pre-empted, and it then takes nearly another 20yrs, "From Annals of
Release No Software Before Its Time"), also has a reference to
9333/Harrier ... which was a serial-copper 80mbit interface (10mbyte/sec
concurrent in both directions). We wanted it to move to being
interoperable with FCS (at 1gbit), starting out with dual-link,
asynchronous serial-copper running at 1/8th & 1/4th FCS speed.
http://www.garlic.com/~lynn/95.html#13

however, after cluster scaleup was transferred and we were told we
couldn't work on anything with more than four processors, we decided to
leave, and instead 9333/harrier evolved into (non-interoperable) SSA
running at 160mbit/sec (20mbyte/sec concurrent in both directions) over
serial-copper links
https://en.wikipedia.org/wiki/Serial_Storage_Architecture

it should have started out with interoperable 128mbit/sec (1/8th FCS)
copper-links being able to move up to 256mbit/sec (1/4 FCS)

One of the other things that I was playing with was helping the VLSI
tool group in bldg. 29 ... which had started a project to build a (heavily
"Sowa" influenced) implementation that supported complex
structures for VLSI design tools. After leaving in the 90s, I did a
re-implementation from scratch and use it for many of the things I
have on our website ... like the internet rfc standards index
http://www.garlic.com/~lynn/rfcietff.htm
and the merged taxonomy and glossaries
http://www.garlic.com/~lynn/index.html#glosnote

When the executive funding unix/posix support on MVS wanted me to help
out ... I had a discussion with him about whether he really understood
why there was such a big uptake of unix in the market. The rise of
unix/posix was to free customers from proprietary hardware platforms,
allowing them to change/switch to the better price/performance platform
with minimum effort (w/o being tied to a specific vendor's proprietary
hardware ... which had been allowing proprietary vendors to charge a
large premium; this was somewhat Amdahl's argument for clone processors
that I mentioned upthread, although what allowed clone processors to
gain a market foothold was the corporation's sidetrack into the future
system effort).

Much of the cloud activity is very similar to the unix/posix activity
from the 80s ... freeing customers from a specific hardware
platform. The rise of cloud, in fact, is largely due to the
mega-datacenters being able to leverage the optimal, best price/performance
hardware for service deployment. The recent Google statements
regarding entry into cloud services, in competition with Amazon, follow
this theme ... with a difference of a few dollars (or even cents) per
BIPS becoming a major market differentiation.

Unix was the open platform of the 80s and drastically simplified
customers moving applications between hardware platforms (compared to
the 60s & 70s). posix standardization efforts were to further extend that
ease. The end of the 80s saw the rise of the unix-wars ... supposedly sun
& AT&T were going to lock other vendors out ... leveraging AT&T's
ownership of the original unix source

with the other vendors trying to create a unix work-alike that met posix
compliance but was free of any at&t source

As mentioned in the above wikis ... all of these vendors were looking
at higher-end platforms with higher level & more complex capability
... not running all that well on the 386 platforms competing with
ms/dos and windows. a dark horse that was specially created for i86
platforms of the period was linux ... also with unix work-alike and
posix characteristics. unix tended to stay on the higher-end chips
while linux & i86 were growing together, becoming more and more
capable.

The staple in the 90s for the growing internet-based services (some
eventually growing into "cloud" service) tended to be various
risc&unix based platforms ... however the linux-based i86 were
starting to rapidly catch up by the end of the last century (in part
because of competition from clone i86 vendors, bearing some analogy to
370 clone processors) ... with the 80s posix promise of ease of
migration to better price/performance platform actually bearing fruit

While there may be quibbling about current degree of compatibility
between various of these platforms ... the ease of migration between
these platforms has shown to be orders of magnitude better than the
situation in the 60s&70s.

During much of the 90s, we were directly and/or indirectly involved
with many of the parties in silicon valley ... getting to watch and/or
participate in many of the growing pains. One of the scenarios was the
scaleup issue, starting with attempting to spread ever increasing load
across a growing number of backend servers with DNS multiple-A records
... which then transitioned to specialized, dynamic load-balancing code
in the internet facing routers.
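
a minimal sketch of the multiple-A-record approach and why it eventually
needed smarter load balancing (illustrative only, hypothetical addresses,
not any particular site's code):

    from itertools import cycle

    # round-robin over the A records: each new lookup hands out the "next" server;
    # the DNS server has no idea which backends are up or how loaded they are
    a_records = cycle(["10.0.0.1", "10.0.0.2", "10.0.0.3"])

    def resolve(hostname):
        return next(a_records)      # crude stand-in for rotating A-record answers

    for _ in range(5):
        print(resolve("www.example.com"))
    # the later router-based load balancers replaced this with per-connection
    # decisions based on actual backend load and health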

Trivia ... in the early 80s the ibm san jose disk division had a
project ("DataHub") to create a pc-lan fileserver for the pc business
market. they had a work-for-hire contract with a small company in
Provo to write some code ... and somebody from San Jose was commuting
nearly every week to Provo as part of the effort. At some point the
IBM company decided to abandon the effort and allowed the company in
Provo to retain rights to the development that they were doing under
the work-for-hire contract. Very shortly afterwards ... there was a
pc-based fileserver being offered by a company in Provo.

as an aside, I found his ETO 1154th weekly status reports at NARA and
imaged them. my wife's mother also wrote a large number of letters to
her own mother ... which were saved and which I've imaged. this was thin airmail
paper with writing on both sides, where even the pencil tended to
"bleed" through, with embossed creases. I've had to do a lot of post processing.
Her mother had stories about dinners with the generalissimo and the general's
wife.

repeated extract from one of the 1154th reports:
On 28 Apr we were put in D/S of the 13th Armd and 80th Inf Divs and
G/S Corps Opns. The night of the 28-29 April we cross the DANUBE River
and the next day we set-up our OP in SCHLOSS PUCHHOF (vic PUCHOFF); an
extensive structure remarkable for the depth of its carpets, the
height of its rooms, the profusion of its game, the superiority of its
plumbing and the fact that it had been owned by the original financial
backer of the NAZIS, Fritz Thyssen. Herr Thyssen was not at home.

Forward from the DANUBE the enemy had been very active, and an intact
bridge was never seen except by air reconnaissance. Maintenance of
roads and bypasses went on and 29 April we began constructing 835' of
M-2 Tdwy Br, plus a plank road approach over the ISAR River at
PLATTLING. Construction was completed at 1900 on the 30th. For the
month of April we had suffered no casualties of any kind and Die
Gotterdamerung was falling, the last days of the once mighty
WHERMACHT.

... snip ...

engineering combat groups were quite fluid organizations, typically
fluctuating between 3-6 engineering battalions and floating around
between different commands.

towards the end, he was frequently the ranking officer into enemy territory
and had a large collection of officer daggers that were turned over in
surrender ceremonies (nearly all of the ww2 stuff was stolen a few
years ago).

--
virtualization experience starting Jan1968, online at home since Mar1970

Operating System, what is it?

jmfbahciv <See.above@aol.com> writes:
Yup. The PDP-10 OSes distributions evolved into a similar
problem. By 1976 or 77, to install a TOPS-10 distribution
required restoring a dozen (or more) CUSP tapes. We finally
figured out how to make a CUSP tape with the latest versions
of software without having to put them all into field test.

the vm370 group would ship monthly "PLC" tapes to customers, containing the
already-built system plus the cumulative source in order to recreate the
built system from scratch.

23jun69 unbundling announcement, in response to gov. & other litigation
... included starting to charge for (application) software ... however
they did manage to make the case that operating system/kernel software
should still be free. misc. past posts mentioning unbundling
http://www.garlic.com/~lynn/submain.html#unbundle

in the early 70s, the internal future system effort was started to
completely replace 360/370 ... and significantly different from 360/370.
lots of 370 activities were killed off during that period. I continued
doing 360 & then moving to 370 stuff ... while periodically ridiculing
future system. misc. past future system posts
http://www.garlic.com/~lynn/submain.html#futuresys

when FS finally crashed and burned, there was a mad rush to get stuff
back into the 370 software&hardware product pipelines ... this sparse
period is also credited with giving the clone processors a market
foothold. in any case, the rush contributed to picking up some of the stuff
I had been doing and including it in vm370 release 3.

The market foothold by clone processors also contributed to changing
decision and starting to charge for kernel software. A bunch of my other
stuff was selected to be packaged as a separate "add-on" ... and the
guinea pig for starting to charge for kernel software (I got to spend a
lot of time with business people about kernel software charging
policies). So there was a several year transition period where kernel
was divided into the old free stuff ... and growing amount of new kernel
"charged-for" stuff (until eventually the switch over to all charged
for).

They wanted me to do monthly PLC releases simultaneous with
the base PLC releases ... but I refused. We negotiated until I was doing
a quarterly PLC synchronized with that month's base PLC. Part of the
problem was that I was in the habit of first doing extensive regression
and performance testing before shipping anything (far in excess of
anything the base product group was doing) ... some of this is described in
past posts about developing an automated benchmarking and testing process:
http://www.garlic.com/~lynn/submain.html#benchmark

If I had been required to ship a monthly PLC ... I wouldn't have had time to do
anything else ... spending all my time on product support and testing
for customer releases ... and not being able to do any of the more fun
stuff.

--
virtualization experience starting Jan1968, online at home since Mar1970

Monopoly/ Cartons of Punch Cards

blmblm@myrealbox.com <blmblm.myrealbox@gmail.com> writes:
I'm having a little trouble figuring out how you can pay all your
bills with literally one click. I'm genuinely curious, though, rather
than trying to pick a fight .... :

First, I'm guessing you do everything from, oh, your bank's Web site?
My limited experience with paying bills online involves navigating
to each payee's Web site and entering username/password information and
doing a certain amount of pointing and clicking to get to the "pay my
bill" page and confirm that I want to use the default payment method
and so forth. I'm not sure how to speed that up much. (I'm not saying
it can't be done, just that I don't know how to do it.)

I can imagine that if you can pay everything from your bank's Web site
that would be quicker, but "one click" .... There aren't any bills that
you need/want to look at to confirm that they're legit?

talks about the mid-90s industry presentations on the motivation for online
dial-up consumer banking moving to the internet (offloading
customer support of serial-port modems to ISPs) ... and at the same
time, the online dial-up cash-management/commercial banking operations were saying
they would *NEVER* move because of the huge number of internet security
exposures (however, they eventually moved anyway) ... however, they were
correct about their long list of internet security exposures.

maus <greymausg@mail.com> writes:
During the week, a lady was semt to jail for fiddling money out
of the accounts of one of U2. Something like 3 million. If she had taken
a billion, she probably would be given voluntary retirement.

jmfbahciv <See.above@aol.com> writes:
That was an advantage to leasing :-). Our customers had their own
mods so we couldn't send them a software system which could load.
Hence the bootable tape with the barebones requirements to initiate
a cold start.

I suspect a lot of customer code was not allowed to leave their
computer room so we weren't able to build monitors for them even
if we were insane enough to try to sell that kind of maintenance
service.

The PLC tape was both ... full executable and build-from-source. the customer
set was somewhat bimodal ... split between those running straight
"vanilla" and those with system mods (or paranoid agencies that wanted
to verify that what they see is what they get).

At the time, the company had a policy that if a "local" (aka branch or
other) location released a charged-for product (up until then,
applications), the people responsible received the first month's lease
from all customers. The month before my resource manager went out, the
science center (given that it was in cambridge, a non-hdqtrs location)
released "VS/Repack" (a program that took traces of application
instruction execution and storage references and re-ordered the
program for optimal execution in a virtual memory/demand paged
environment) ... and the two primary people responsible got the first
month's lease.
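
the flavor of what VS/Repack did, in miniature (a toy illustration of
ordering by reference trace, not the actual program):

    from collections import Counter

    # given a trace of which routine each storage reference touched, order the
    # routines by use so the hot ones pack into the fewest resident pages
    trace = ["init", "parse", "eval", "eval", "parse", "eval", "print", "eval"]
    hot_first = [name for name, _ in Counter(trace).most_common()]
    print(hot_first)    # ['eval', 'parse', 'init', 'print'] -- suggested layout order

    # the real tool also considered which routines were referenced close together
    # in time, so routines used together land in the same page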

In the month between VS/Repack and my "Resource Manager" ... the
science center was reclassified as a hdqtrs location and no longer
eligible for any incentive. Software product pricing was still under
gov. policies ... prices had to at least cover all development and
support costs (divided by the number of customers). The next product out
after mine was going to be the "favorite son batch operating system
resource manager" ... which had been done by large numbers of people
and required a price of $895/month. Even though my "Resource Manager"
could have gone for a couple tens of dollars a month ... corporate
policy wanted it to be the same price as the favorite son operating
system's. Mine quickly went to 1000 customers ($895,000 the first month; I
offered to forfeit my salary just to have that $895,000). Also because
the price was set so high (to correspond with the POK favorite son
batch operating system), only high-end systems signed up ... so it didn't
get the mass of the market with low & mid-range systems (and other
higher end systems) ... a lower price could have picked up the rest of
the market and actually brought in more aggregate money.

The customers had the SHARE (user group organization) "waterloo" tape
(managed by the university of waterloo) that had user contributed source
changes and applications. There was even quite a bit of source code
change contributed by a large customer and SHARE member with
installation code "CAD"; share members were assigned 3letter
installation codes ... which were also used in VMSHARE online computer
conferencing ... initially provided by TYMSHARE AUG1976 ... archived
here:
http://vm.marist.edu/~vmshare
VMSHARE history also mentions waterloo
http://vm.marist.edu/~vmshare/browse?fn=VMSHIST&ft=MEMO

CAD was a government agency ... but they didn't choose the agency's
TLA ... but close ... folklore is that CAD was chosen because it
stands for cloak-and-dagger.

In any case, after starting to charge for kernel software in late 70s
(in response to market foothold by clone processors), the next step in
the 80s, was "object-code-only" (aka no longer shipping source code)
leading to the OCO-wars

Melinda's history goes into other detail ... that the science center
had originally wanted a 360/50 to modify ... adding hardware changes
to support virtual memory ... but all the spare 360/50s were going to
the FAA ATC effort. So the science center had to settle for a 360/40. There
were some references to making the hardware changes to add virtual
memory to the 360/40 turning out to be a lot easier than if they had gotten
a 360/50.

as well as "VM and the VM Community: Past, Present, and Future"
(including the pdf & kindle formatted versions I supplied) ... pgs
27-32 mention cp/40 and the hardware changes for 360/40 to support
virtual memory.

When the 360/67, with virtual memory hardware support as standard,
became available, cp/40 morphed into cp/67.

Trivia: the original 360 announcement had the 360/60, 360/62, and 360/70
... all with 1microsecond access memory. Folklore I heard was that the
model numbers were changed with the switch-over from 1microsecond
access memory to 750ns access memory

--
virtualization experience starting Jan1968, online at home since Mar1970

from above:
It is a tribute to the CP/CMS recovery system that we could get 27
crashes in in a single day; recovery was fast and automatic, on the
order of 4-5 minutes. Multics was also crashing quite often at that
time, but each crash took an hour to recover because we salvaged the
entire file system. This unfavorable comparison was one reason that the
Multics team began development of the New Storage System.

... snip ...

The way I remember it was that I had added the TTY terminal support to cp67
in the 60s when I was an undergraduate at the univ. I played some games
with a 1-byte length field ... since TTY line lengths were limited to 80bytes. Multics
was on the 5th flr of 545 tech sq, the science center was on the 4th flr, and
the science center machine room (with the 360/67 running cp67) was on the 2nd
flr. The USL machine room running cp67 was in the tech sq building across
the quad. My understanding was somebody down at Harvard got a new kind
of ASCII device ... and so somebody (Tom) at USL just patched the
maximum line length to 1200 bytes. This resulted in invalid length
calculations and buffer overruns causing the 27 crashes (& automatic
re-boot).
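
the arithmetic of what went wrong when the 1-byte length assumption met a
1200-byte maximum (a sketch of the truncation, not the actual cp67 code):

    max_line = 1200
    one_byte_field = max_line & 0xFF    # what actually fits in a 1-byte length field
    print(one_byte_field)               # 176 -- not 1200
    # length calculations done with the truncated value no longer match the real
    # buffer size, so copies run past the end of the buffer (the 27 crashes)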

but it was more like this recent post about MVS having a 15min MTBF (requiring
manual re-boot) when the disk engineering lab attempted to install MVS
for use in testing disks in the process of being developed (I then
offered to rewrite the i/o supervisor to be bullet proof and never fail):
http://www.garlic.com/~lynn/2012j.html#13 Can anybody give me a clear idea about Cloud Computing in MAINFRAME ?

--
virtualization experience starting Jan1968, online at home since Mar1970

Simulated PDP-11 Blinkenlight front panel for SimH

Peter Flass <Peter_Flass@Yahoo.com> writes:
IBM was the same. I *think* TCP/IP for VM ("academic computing") came
from outside, and came much later to MVS etc. as a port from VM.

"tcp/ip for vm" (5798-FAL) was done by ibm in vs/pascal ... when the
communication group couldn't block it being announced ... they had some
other tricks up their sleeves. the performance wasn't all that good; it
got about 44kbytes/sec using nearly a whole 3090 processor. I did the
rfc1044 changes and in some tuning tests at cray research between a cray
and a 4341 got channel speed using only a modest amount of the 4341 processor
(possibly a 500 times improvement in bytes moved per instruction
executed). misc. past posts mentioning rfc1044 support
http://www.garlic.com/~lynn/subnetwork.html#1044

later vm370 tcp/ip was ported to mvs by simulating some of the vm370
functions on mvs.

later the communication group hired a subcontractor to do tcp/ip support
in vtam. he came back with tcp/ip having much better performance than
lu6.2. the communication group told him that everybody knows that a
*correct* implementation of tcp/ip is much slower than lu6.2 ... and
they were only going to pay for a *correct* implementation.

univ. of wisconsin had done wiscnet which was made available as 5798-drg
circa 1984; that was replaced by 5798-fal in april1987.

--
virtualization experience starting Jan1968, online at home since Mar1970

Note that in the 60s&70s the japan auto industry was underselling
cars in the US, and then the import quotas were put on. In the early 80s
there was an article calling for a 100% unearned-profit tax on the us auto
industry (washington post?). The scenario was that the quotas were
supposed to reduce competition, giving the us industry enormous profits
that they would use to completely remake themselves. However, the
industry was just pocketing the profits and continuing business as
usual.

In 1990, the us industry had a C4 taskforce to look at completely remaking
themselves. they were planning on heavily leveraging technology and so
invited representatives of technology vendors to participate. In the
meetings they could accurately describe the competition and what
needed to be done to respond (however, with all the stakeholders, they
still continued "business as usual").

One of the issues was that with the import quotas, the japan automakers
figured that they could sell that many luxury autos as easily as entry autos
... so they radically changed their auto development process, cutting
the elapsed time to develop an auto from start to rolling off the
line in half (compared to the industry standard), in order to start shipping
radically different autos.

At the time of the C4 taskforce, US auto industry development was
still on the traditional 7-8 yrs elapsed time ... while the japan auto
makers were in the process of cutting the development elapsed time in
half again. The Japan auto makers were going to be able to adapt to
any change in auto market conditions four times faster than their US
competition.

Offline I would chide the mainframe brethren attendees about how they
expected to help, since they were on a similar development timeframe as
the us auto industry.

--
virtualization experience starting Jan1968, online at home since Mar1970

from above:
But the other scandal is even worse. It involves a more general
practice, starting around 2005 and continuing until -- who knows? it
might still be going on -- to rig the Libor in whatever way necessary
to assure the banks' bets on derivatives would be profitable.

.... snip ...

Note that in the too-big-to-fail money laundering scandal from the
summer of 2010 ... somebody coined the term too-big-to-jail ... that
with everything being done to keep the too-big-to-fail in business,
they weren't going to let a little thing like money laundering for the
drug cartels get in the way.

One of the observations was that since at least the start of the century, most
of the regulatory agencies had entered 3-monkey mode (see no evil,
hear no evil, speak no evil).

"Dirty clean" versus "clean clean" pretty much sums up Wall Street's
view of cheating. If everybody does it, nobody should be held
accountable if caught. Alas, many United States regulators and
prosecutors seem to have bought into this argument.

Trivia: SVS was basically MVT laid out in a virtual address
space. SVS->MVS gave each application and subsystem its own 16mbyte
virtual address space, with an image of the MVS kernel taking up half of that
16mbytes. The os/360 genre has extensively pointer-passing APIs ... so for
application->subsystem calls (in different address spaces), a common segment
present in every virtual address space was created. By the 3033, larger
installations were facing the prospect of the common segment increasing to
8mbytes ... leaving nothing in each virtual address space for the
application. Somebody retrofitted dual-address space to the 3033 ... uptake
by subsystems would allow them to access application data w/o needing the
common segment ... somewhat reducing the pressure on common segment
growth.
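
the squeeze in numbers (just the figures from the paragraph above):

    address_space = 16   # mbytes of 24-bit virtual address space
    mvs_image     = 8    # mbytes of MVS kernel mapped into every address space
    common_seg    = 8    # mbytes the common segment was heading toward on 3033s
    print(address_space - mvs_image - common_seg)   # 0 mbytes left for the application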

The person responsible for 3033 dual-address space was also heavily
involved in various IBM 801/risc activity. He leaves and goes to work
for HP on their risc products (I get internal ibm email asking if I'm
leaving with him). Later he heads the Itanium effort ... is supposedly
the 64bit server followon to i86 (an attempt is also made to recruit
us by the executive responsible for superdome). Itanium has lots of
growing pains and AMD responds with a 64bit i86 implementation also
risc core with i86 instructions remapped to risc micro-ops for actual
execution. Intel has to respond with something similar. Lots of the
business-critical, industrial strength dataprocessing features from
Itanium start migrating to 64bit i86 implementations.

The big mega-datacenters (also housing nearly all cloud services)
start driving a lot of server chip requirements ... each accounting
for hundreds of thousands or even millions of chips. They are already
involved in assembling/manufacturing their own server blades
... claiming they have such scale that they can do it for 1/3rd the cost
of buying brand name blades (their linux use can be considered
analogous to assembling their own blades, they can also build their
own tailored linux). They are pushing blade costs so low that
power&cooling becomes larger & larger percentage of their total cost
of operation. Their on-demand operation will tend to have large number
of blades idle during low-load periods ... but available for on-demand
during peak loads. This is big driver for chip technology to drop
power use (and heat) to near zero when idle ... but be able to
instantaneously ramp up.

At some point in the past there was a transition from such operations
looking at capturing mainframe dataprocessing ... to mainframes now
trying to figure out how they can play in such markets;
mega-datacenter, cloud services becoming the major consumer of technology
... by comparison, traditional corporate commercial dataprocessing is
becoming a smaller and smaller part of the computing market.

Dan Espen <despen@verizon.net> writes:
That whole PCP vs. MFT vs. MVT thing always struck me as a distinction
necessitated by the huge amounts of memory the OS consumed compared
to the processors of the day.

it didn't stop with os/360 ... in MVS, with every application given
its own 16mbyte virtual address space, the system was in danger of taking
over the whole address space ... leaving nothing for the application.
recent post from this morning in a different discussion
http://www.garlic.com/~lynn/2012j.html#26 Can anybody give me a clear idea about Cloud Computing in MAINFRAME ?

From the "Annals of Unintended Consequences" ... the call for 100%
unearned profit tax was not only did import quotas reduce competition
allowing US automakers to rapidly increase their prices leading to
huge profits (that were suppose to go to completely remaking
themselves) ... but the Japan change in product mix from low-end to
high-end ... further reduced any downward price pressure ... and their
move to high-end had additional stimulus for raising price of US
products (resulting in US makers even rolling in more money that never
got to the original intended purpose ... to completely remake
themselves). The combination was claimed that it allowed US makers to
effectively double their price over a short period of time (but
continuing business as usual and no measurable change in product).

The Japanese auto makers at least offered something additional ... and
it was motivated by having the import quota fixed cap placed on the
number of cars. There is no corresponding justification for what went
on in the US auto industry.

There was an issue circa 1990 which they acknowledged was prodding US
automakers to improve reliability. Doubling the price had required moving
from 2-3yr loans to 5-6yr loans, and US autos now weren't lasting as long
as the loans. The initial response wasn't any particular quality
improvement, but extending the warranties to be comparable to the duration
of a typical loan. Repair costs covered by warranties then started
to impact the bottom line ... which then became the motivation to start
improving quality ... not just that the foreign competition had so much
better quality.

US auto industry also restructured business so that auto manufacturing
showed almost no profit and nearly all the profit coming from selling
auto loans. GLBA (bank modernization act) is better known now for
repeal of Glass-Steagall ... but at the time, the rhetoric on the
floor of congress was that the primary purpose of the bill was to
prevent federal bank charters being granted to m'soft and
walmart. However, in the bowels of legislation there was loophole
exempting existing Utah ILCs ... somebody buying an existing Utah ILC
could do banking in all states (w/o requiring a federal bank charter).

You saw the auto companies buying UTAH ILCs ... so they could do their
own auto loans in all 50 states (w/o coming under federal reserve
jurisdiction). However in 2004 when Walmart was going to buy a UTAH
ILC ... so it could become its own acquiring bank (electronic payment
transaction interchange fees are split between the acquiring financial
institution and the issuing financial institution) there was big
publicity campaign rallying community banks against it. The issue was
Walmart accounts for around 30% of transactions in the country and
possibly 10% (or more) of the bottom line of its too-big-to-fail
merchant acquiring institution. Walmart becoming its own acquiring
financial institution would have no effect on community banks (which
makes the publicity campaign dubious) ... but it would have had big
impact on its too-big-to-fail merchant acquiring institution.

Big part of the economic mess was (mostly) unregulated
(non-depository) loan originators (no deposits, no FDIC regulation)
... being able to "buy" triple-A ratings on mortgages packaged as
toxic CDOs. (even when the rating agencies knew they weren't worth
triple-A). this allowed the (mortgage) triple-A rated toxic CDOs to be
sold off to institutions that were restricted to dealing in only
triple-A .... $27T done during the economic mess
Evil Wall Street Exports Boomed With 'Fools' Born to Buy Debt
http://www.bloomberg.com/apps/news?pid=newsarchive&refer=home&sid=a0jln3.CSS6c
Triple-A rating trumps everything else ... so loan originators no
longer had to care about borrowers' qualifications and/or loan quality
("liars loans" ... no-documentation, no down, etc)

Those doing auto loans were also able to ride the coat tails
... forbes article about wallstreet attempting to punish somebody that
brought the issue up early in this century.

from above:
Watsa's only sin was in being a little too early with his prediction
that the era of credit expansion would end badly. This is what he said
in Fairfax's 2003 annual report: "It seems to us that securitization
eliminates the incentive for the originator of [a] loan to be credit
sensitive. Prior to securitization, the dealer would be very concerned
about who was given credit to buy an automobile. With securitization,
the dealer (almost) does not care."

... snip ...

quite a bit more topic drift ... at the end of 2008, just the four largest
too-big-to-fail banks were still carrying $5.2T of triple-A rated toxic CDOs off
balance sheet (each with at least $1T). This was supposedly what TARP was
for ... but with only $700B appropriated ... it wouldn't have made a
dent ... so they had to find other uses for TARP. Earlier in the fall of
2008, a few tens of billions worth had gone for 22cents on the dollar. If
those four (and the others) had been forced to bring the off-balance-sheet
triple-A rated toxic CDOs onto the balance sheet, they would have been
declared insolvent and forced to be liquidated.
Bank's Hidden Junk Menaces $1 Trillion Purge
http://www.bloomberg.com/apps/news?pid=newsarchive&sid=akv_p6LBNIdw&refer=home

They eventually had to come up with other mechanisms to try and deal
with the enormous amount of off-balance triple-A rated toxic CDOs

Walmart has several hundred stores with (traditional) bank branches
.... (similar to various large grocery stores) ... however, one of the
major reasons for the "unbanked" population is the high margins for
traditional banking ... to fully address the problem would require
significantly reducing the cost of traditional banking. The solution
requires some innovation and would be quite disruptive to existing
large financial institutions.

when I first wandered into bldg14 (dasd engineering development), they
were doing stand-alone, dedicated, 7x24 scheduled "testcell"
testing. At one point they had tried installing MVS for concurrent
testing ... but found in that environment MVS had 15min MTBF
(hang/crash requiring re-ipl) ... even with single testcell. I offered
to rewrite i/o supervisor to provide bullet-proof, never-fail
operation ... enabling on-demand, concurrent, anytime testing
... greatly improving disk development productivity. I then wrote an
internal-only document describing what I had done and happened to
mention MVS's 15min MTBF, and brought down the wrath of the MVS group
on my head. I was told (unofficially) that I would never get any sort
of promotion/award requiring corporate level concurrence since they
would always oppose it

part of the scenario was that in the wake of the FS failure (and the
sycophancy and make-no-waves culture), many careers became seriously
oriented towards "managing information up the chain". From Ferguson &
Morris, "Computer Wars: The Post-IBM World", Time Books, 1993:
... and perhaps most damaging, the old culture under Watson Snr and Jr
of free and vigorous debate was replaced with sycophancy and make no
waves under Opel and Akers. It's claimed that thereafter, IBM lived in
the shadow of defeat

... snip ...

another quote from the book:
But because of the heavy investment of face by the top management, F/S
took years to kill, although its wrongheadedness was obvious from the
very outset. "For the first time, during F/S, outspoken criticism
became politically dangerous," recalls a former top executive.

Similar to the big cloud services using linux ... because they have
full source and can build/tailor from scratch (in much the same way
they've been building their own blades) ... the online (virtual
machine based) service providers had full source and could
build/modify cp67.

june-23-1969 unbundling announcement, in response to gov. & other
litigation ... included starting to charge for (application) software
... however they did manage to make the case that operating
system/kernel software should still be free. misc. past posts
mentioning unbundling
http://www.garlic.com/~lynn/submain.html#unbundle

in the early 70s, the internal future system effort was started to
completely replace 360/370 ... and was significantly different from
360/370. lots of 370 activities were killed off during that period. I
continued doing 360 & then moving to 370 stuff ... and periodically
ridiculing future system. misc. past future system posts
http://www.garlic.com/~lynn/submain.html#futuresys

when FS finally crashed and burned, there was a mad rush to get stuff
back into the 370 software&hardware product pipelines ... this sparse
period is also credited with giving the clone processors a market
foothold. in any case, the rush contributed to picking up some of the
stuff I had been doing and including it in vm370 release 3.

The market foothold by clone processors also contributed to changing
the decision and starting to charge for kernel software. A bunch of my
other stuff was selected to be packaged as a separate "add-on" ... and
the guinea pig for starting to charge for kernel software (I got to
spend a lot of time with business people about kernel software
charging policies). So there was a several year transition period
where kernel was divided into the old free stuff ... and growing
amount of new kernel "charged-for" stuff (until eventually the switch
over to all charged for).

one of the online service providers was TYMSHARE, which provided their
online computer conferencing for free to SHARE as VMSHARE starting in
aug1976 ... archived here
http://vm.marist.edu/~vmshare

In any case, after starting to charge for kernel software in the late 70s
(in response to the market foothold by clone processors), the next step in
the 80s was "object-code-only" (aka no longer shipping source code),
leading to the OCO-wars

footnote ... a major motivation for the future system effort was to provide
such a complex and highly integrated environment that it would
significantly raise the barrier to clone controllers (besides being
completely different from 370). As an undergraduate in the 60s, I added
tty/ascii support to cp67. As part of that, I tried to get the IBM
terminal controller to do something it couldn't quite do. This was the
motivation for the univ. to start a clone controller effort: take an
Interdata/3, program it to emulate IBM's controller (but also doing
the stuff I wanted to do), reverse engineer the channel interface and
build a channel interface board for the Interdata/3. Four of us get
written up as being responsible for (some part of) the clone
controller business.

from above:
Introverts are also comfortable with solitude -- a crucial spur to
creativity. When the psychologists Mihaly Csikszentmihalyi and Gregory
Feist studied the lives of the most-creative people across a variety
of fields, they almost always found visionaries who were introverted
enough to spend large chunks of time alone.

mentioning this reference to the disastrous effects that the FS
failure had on the IBM culture, Ferguson & Morris, "Computer Wars:
The Post-IBM World", Times Books, 1993:
... and perhaps most damaging, the old culture under Watson Snr and Jr
of free and vigorous debate was replaced with sycophancy and make no
waves under Opel and Akers. It's claimed that thereafter, IBM lived in
the shadow of defeat

... snip ...

another quote from the book:
But because of the heavy investment of face by the top management, F/S
took years to kill, although its wrongheadedness was obvious from the
very outset. "For the first time, during F/S, outspoken criticism
became politically dangerous," recalls a former top executive.

... snip ...

I had sponsored Boyd's briefings at IBM, one of his themes was To Be
or To Do
"There are two career paths in front of you, and you have to choose
which path you will follow. One path leads to promotions, titles, and
positions of distinction.... The other path leads to doing things that
are truly significant for the Air Force, but the rewards will quite
often be a kick in the stomach because you may have to cross swords
with the party line on occasion. You can't go down both paths, you
have to choose. Do you want to be a man of distinction or do you want
to do things that really influence the shape of the Air Force? To be
or to do, that is the question." Colonel John R. Boyd, USAF 1927-1997

From the dedication of Boyd Hall, United States Air Force Weapons
School, Nellis Air Force Base, Nevada. 17 September 1999

... snip ...

and most recent item from To Be or To Do blog which has "A quick
checklist that may help you assess the health of your culture and
attitude towards excellence"
http://tobeortodo.com/2012/07/11/questions-to-consider-when-reflecting-on-your-organizations-culture/

--
virtualization experience starting Jan1968, online at home since Mar1970

Is VAX decoding really that bad

Rick Jones <rick.jones2@hp.com> writes:
I am a poor student of DEC History, but believe they were in something
of a decline before being purchased by Compaq, which I believe
pre-dates anything Itanium. To the extent one can accept the source,
there is a Wikipedia article on DEC that includes its so called "final
years"

43xx machines saw similar volumes in small-number unit sales ... the big
difference between 43xx & vax numbers was the large multi-hundred corporate
orders for 43xx ... sort of the leading edge of the distributed computing
tsunami.

ibm 4361&4381 (follow-on to 4331&4341) were expected to see continued
huge growth in sales ... but it never happened ... by that time, the
mid-range was starting to move to workstations & large PCs (can be seen
in the last couple yrs of vax numbers).

--
virtualization experience starting Jan1968, online at home since Mar1970

.... that is compared to ibm's base price of $1815 for an e5-2600 blade
(nearly 1/10th the price for a faster, more efficient blade; major cloud
vendors are doing their own manufacturing for possibly 1/3rd the cost
of brand-name blades)

As previously mentioned, i86 vendors have converted over to risc for
i86 execution, with a hardware layer that translates i86 instructions to
risc micro-ops for execution.

Note that what really created the cloud is the big cloud vendors with their
mega-datacenters with millions of chips&components chosen for optimal
cost/performance ... they assemble their own blades as well as build
their own operating systems from open source (further pushing optimal
total cost of ownership). Effectively everybody else is trying to ride
the coat-tails ... especially those that don't have open hardware
and/or open operating systems, and will even try to obfuscate the
issues.

Google Data Centers 'The Most Efficient In The World'
http://www.informationweek.com/news/internet/security/showArticle.jhtml?articleID=209600041

from above:
Teetzel explained that while all data centers use water for cooling,
Google-designed data centers don't use water for chillers, which are a
kind of air conditioner. Instead, Google uses cooling towers, which
just let the water evaporate without using any power.

... snip ...

note that while there is a generation (or more) between the E5-2600
and its precursor from the time of the POWER7 introduction ... it is
the only POWER listed in the SPECINT results ... there are hundreds of i86
benchmarks listed ... but only a single POWER ... so it is the only
thing that can be compared. Possibly whenever a POWER8 might come out
and be benchmarked ... i86 may have also cycled an additional
generation (playing the generation card is only really valid when
cherry-picking generations for comparison ... it doesn't really
apply if it is the only thing available)

From the cloud vendor standpoint the POWER7 price/BIP, electricity/BIP
and heat/BIP ... would compare unfavorably even with the precursor to
E5-2600.

The Conceptual ATM program

Peter Fairbrother <zenadsl6186@zen.co.uk> writes:
It's the same in the UK for ATM withdrawals - and even for
credit/debit cards it's only the PIN verification part of the process
which is commonly done offline, the actual transaction authorisation
is usually still done online. Account credit is account credit, after
all. :)

the introduction of the chip card with PIN verification by the chip ... has
also been claimed to go along with changing the burden of proof ... i.e. in
disputes the individual now has to prove they weren't at fault ... as
opposed to the financial institution proving that they weren't at fault. In
some scenarios, the financial institution says it can't find the video
surveillance for the individual to prove it wasn't them (as opposed to
the financial institution being required to produce the video surveillance
showing that it was the individual).

the chip was doing static data authentication with the terminal ...
which resulted in the rise of the (counterfeit) Yes Card ... i.e.
compromise the terminal to harvest static authentication chip data
(effectively the same technology used to harvest static magstripe data)
for creation of a counterfeit card. Once the terminal authenticates the
chipcard, the counterfeit card is programmed to answer "YES" to the
three terminal questions: 1) is the correct PIN entered ("YES"), 2)
should the transaction be done offline ("YES") and 3) is the transaction
within the card limit ("YES"). Decade-old Cartes 2002 trip report,
including mention of a presentation on the Yes Card
http://web.archive.org/web/20030417083810/http://www.smartcard.co.uk/resources/articles/cartes2002.html
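
A minimal sketch of the offline decision flow described above -- the
structure and names are illustrative assumptions, not taken from any
chipcard specification -- showing why a counterfeit card that simply
answers "YES" defeats the deactivate-the-account countermeasure (the
transaction never goes online where a deactivated account would be
caught):

  /* illustrative only: terminal-side offline decision flow, with the
     three questions delegated to the card; a counterfeit "yes card"
     supplies callbacks that always return true */
  #include <stdbool.h>

  struct card {
      bool (*pin_correct)(const char *pin_entered);  /* 1) correct PIN entered?    */
      bool (*do_offline)(void);                      /* 2) do transaction offline? */
      bool (*within_limit)(long amount);             /* 3) within the card limit?  */
  };

  /* stand-in for the issuer's online authorization (never reached
     when the card insists on going offline) */
  static bool go_online_for_authorization(long amount) { (void)amount; return false; }

  bool approve(struct card *c, const char *pin, long amount)
  {
      if (!c->pin_correct(pin))     return false;   /* yes card: always true */
      if (!c->within_limit(amount)) return false;   /* yes card: always true */
      if (c->do_offline())          return true;    /* yes card: true -> approved offline */
      return go_online_for_authorization(amount);
  }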

The following year at the "ATM Integrity Taskforce" meetings there was a
LEO presentation on the YES CARD and occurrences around the world;
somebody in the audience made the comment that "they've managed to spend
billions of dollars to prove that chips are less secure than
magstripe". The issue is that harvesting the static authentication data
for chip & magstripe is comparable ... however the countermeasure for a
counterfeit magstripe card is to deactivate the account number (not
allowing the online transaction to go through), which doesn't work with the
counterfeit YES CARD ... since the YES CARD forces the terminal to
do an offline transaction.

... more of the testimony by the person who had tried unsuccessfully
for a decade to get the SEC to do something about Madoff. note it was
congress that passed the law that said the CFTC couldn't regulate
derivatives.

Business news this morning is making sounds like a repeat of
Sarbanes-Oxley and Enron/Worldcom .... however, while Sarbanes-Oxley
was supposed to prevent such stuff from ever happening again ... it did
require that the regulatory agencies do something. Apparently even GAO
didn't think that the regulatory agencies were doing anything and started
doing reports of public company fraudulent financial filings ... even
showing an uptick after SOX (recently seen on the internet: "Enron was a dry
run and it worked so well it has become institutionalized"). Enron
involved both SEC and CFTC as well as help from congress and other
parties.

in the time-frame of the cartes2002 presentation on the Yes Card there was
a large chip&pin pilot deployment in the US. The YES CARD exploit was
explained to the people doing the pilot ... who were extremely card
centric (and almost solely lost/stolen card focused) ... and they said
that they would change the deployed, valid cards to always go
online. The problem was that there wasn't a direct attack on valid cards
... it was an attack on terminals to harvest valid card authentication
information (basically the same attack used to skim magstripe card
information). Then the counterfeit yes cards were programmed to never
go online ... so the supposed countermeasure of having valid cards always go
online ... had no effect on the counterfeit yes cards.

in any case, not long after the yes card exploit became better known,
the large pilot appeared to disappear leaving no trace. the failure of
that large pilot then possibly contributed to resistance to trying it
again in the US until the technology had significantly changed and been
proven elsewhere. past posts mentioning the yes card
http://www.garlic.com/~lynn/subintegrity.html#yescard

there is a reference to one of my old postings here .... in return for
helping with some stuff, the Los Gatos lab let me have part of a wing and
some labs ... it wasn't until shortly after the 3624 moved ... but
many of the people that had worked on it were still at the lab.
https://en.wikipedia.org/wiki/IBM_3624

Monopoly/ Cartons of Punch Cards

Walter Banks <walter@bytecraft.com> writes:
One of the trips I made to Japan for 7 weeks I was given a little
conference room to use as an office. One day I came back early
from lunch to find a young lady studying math for the Japanese
equivalent of a GED. I knew her from the tea and cookies she
brought at breaks during the day.

The level of the math she was studying was about second year
university calculus.

one of the other things done was that they started building
manufacturing plants (to get around quota restrictions) ... a comment from
that period was that they needed to require a junior college degree in
order to get workers with a highschool education. the reference was that even
to get somebody with a US highschool-level education required a US junior
college degree ... because so many US highschools were just handing out
diplomas to students that hadn't actually received an education.

we've had a.f.c. discussions in the past about states requiring
proficiency tests in order to receive a highschool diploma ... and then
postponing the effective date ... even when the highschool graduation
proficiency tests only involved being able to do 7th grade level math &
reading (worried that requiring 7th grade proficiency for 12th grade
graduation would result in a large percentage not getting diplomas).

--
virtualization experience starting Jan1968, online at home since Mar1970

from above:
Introverts are also comfortable with solitude -- a crucial spur to
creativity. When the psychologists Mihaly Csikszentmihalyi and Gregory
Feist studied the lives of the most-creative people across a variety
of fields, they almost always found visionaries who were introverted
enough to spend large chunks of time alone.

A corporate review pointed out that I had to add a lot of manual
tuning parameters before it would be approved for release ("since the
state-of-the-art was lots of manual tuning parameters", the POK
favorite son batch operating system had enormous numbers of manual
tuning parameters and there were presentations at SHARE about enormous
numbers of benchmarks comparing random changes in those parameters). I
tried to point out that my undergraduate work from a decade earlier
was dynamic adaptive resource management, eliminating the necessity
for all those manual tuning parameters (falling on deaf ears). So I
added tuning parameters, published the code, formulas and
description. Decades later nobody had gotten the joke: the dynamic
adaptive code had more degrees of freedom than the manual tuning
parameters ... any manual change could be compensated for by the
dynamic adaptive code.

--
virtualization experience starting Jan1968, online at home since Mar1970

at least one person in congress investigating the recent lack of sharing was
a major figure from the prior era putting up the walls preventing sharing
(contributing to congress not taking any responsibility ... possibly
one of those laws of unintended consequences)

--
virtualization experience starting Jan1968, online at home since Mar1970

A lesson from history about wasted valor, for which a price might be asked of us

From: lynn@garlic.com (Lynn Wheeler)
Date: 15 July, 2012
Subject: A lesson from history about wasted valor, for which a price might be asked of us
Blog: Boyd Strategy

Boyd side-track:
When he took command in May 1968, much of the 3d Marine Division was
tied down to combat bases, places like Vandegrift and Camp
Carroll. They were part of the "McNamara Line" conceived to shut down
enemy use of the Ho Chi Minh Trail. And according to Davis this simply
wasn't working.

... snip ...

Boyd would mention spook-base/NKP in his organic design for command and
control briefings, pg28: "My use of 'legal eagle' and comptroller at
NKP"

as well as claiming that it wouldn't work, even before taking command.

Coram's biography mentions Boyd's tour at spook-base ... as well as it
being a "$2.5B windfall" for IBM (over $17B in today's dollars)

from above:
Records indicate that when the program closed, QU-22 'Quackers' never
flew operationally as pilotless drones as initially programmed.
Reliable eyewitness accounts indicate that they were seen flying
pilotless on occasion, but other information indicates that the
Commander at NKP issued a standing order that none were to be flown
operationally without a pilot.

... snip ...

doesn't say whether or not it was Boyd.

--
virtualization experience starting Jan1968, online at home since Mar1970

an attempt was made to get it into the (ibm mainframe) 370
architecture. The "owners" of the 370 architecture rebuffed the effort,
saying that the 360 "test&set" multiprocessor locking instruction
was more than adequate. If compare&swap were to be added
to 370, we had to come up with uses that weren't specific to
multiprocessor operation.

thus were born the examples (still in the ibm mainframe principles of
operation) of the use of the instruction by multi-threaded applications

the scenario operations are one or more storage fetches, operating on the
values and storing them back. in multi-threaded applications ... even on a
single processor ... an interrupt could occur between the fetch and store,
and as part of interrupt processing, the identical sequence could be
executed by a different thread. when the original interrupted thread
is resumed, values would be stored that didn't reflect what happened
as part of the interrupt processing. something similar could happen if
two threads were running concurrently on different processors. the typical
solution was to perform a kernel call where the operation would be
performed with interrupts disabled.

compare&swap performed an atomic operation (not
interruptable) that only did the store if the compared value had
not changed. a simple example is a count increment: load a five, add one
... and compare&swap ... which will only store the six if
the current value is still five. another is a push/pop list: load the
pointer to the top of the list, load the following pointer from the
first item, and compare&swap ... only storing the following
pointer (as the new top) if the top of the list still points to the first item.
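
the same retry pattern survives in current hardware; a minimal sketch of
the two examples above using C11 atomics rather than the 370
compare&swap instruction itself (the names here are illustrative only):

  /* illustrative C11-atomics version of the count increment and the
     list push described above: only store if the compared value is
     unchanged, otherwise refresh and retry */
  #include <stdatomic.h>

  static _Atomic long counter;

  void increment(void)
  {
      long old = atomic_load(&counter);            /* fetch current value */
      /* store old+1 only if counter still holds old; on failure, old is
         refreshed with the current value and the loop retries */
      while (!atomic_compare_exchange_weak(&counter, &old, old + 1))
          ;
  }

  struct node { struct node *next; };
  static _Atomic(struct node *) top;               /* top of the push/pop list */

  void push(struct node *n)
  {
      struct node *first = atomic_load(&top);      /* current first item */
      do {
          n->next = first;                         /* new item points at old top */
          /* store n as the new top only if top still points at first */
      } while (!atomic_compare_exchange_weak(&top, &first, n));
  }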

financial database transactions get somewhat more complex ... since
the data has to come off disk, be updated and written back to disk in a
consistent manner ... e.g. two independent tasks attempting to debit
the same account concurrently.

from above:
Dhrystone tries to represent the result more meaningfully than MIPS
(million instructions per second) because instruction count comparisons
between different instruction sets (e.g. RISC vs. CISC) can confound
simple comparisons. For example, the same high-level task may require
many more instructions on a RISC machine, but might execute faster than
a single CISC instruction. Thus, the Dhrystone score counts only the
number of program iteration completions per second, allowing individual
machines to perform this calculation in a machine-specific way. Another
common representation of the Dhrystone benchmark is the DMIPS (Dhrystone
MIPS) obtained when the Dhrystone score is divided by 1757 (the number
of Dhrystones per second obtained on the VAX 11/780, nominally a 1 MIPS
machine).
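
a small worked example of the division described in the quote -- the
machine's Dhrystone score below is made up purely for illustration:

  /* illustrative arithmetic only: DMIPS = Dhrystones/sec divided by
     1757 (the VAX 11/780 figure quoted above) */
  #include <stdio.h>

  int main(void)
  {
      double dhrystones_per_sec = 10000000.0;      /* assumed machine score */
      double dmips = dhrystones_per_sec / 1757.0;  /* roughly 5692 DMIPS */
      printf("%.0f DMIPS\n", dmips);
      return 0;
  }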

scott_j_ford@YAHOO.COM (Scott Ford) writes:
Very true..but still I think Yahoo has a responsibility to their customers

We were tangentially involved in the cal. data breach
notification act (the "original" notification act) having been brought
in to help wordsmith the cal. electronic signature act.

several of the participants were involved in privacy issues and had
done extensive surveys. the #1 issue from the surveys was identity
theft, primarily the form involving account fraud (fraudulent
financial transactions), primarily as a result of data
breaches. There seemed to be little or nothing being done about
the problem and there was some hope that the publicity from the
notifications would motivate countermeasures. The issue was that
security measures are usually taken for self-protection; the problem
was that the institutions with the data breaches had little at
risk ... it was their clients/customers that were suffering the fraud
... and so the institutions had no motivation to take corrective action. Since
then the proposed federal legislation has been about evenly divided
between requirements similar to the original cal. bill and those that
eliminate most requirements for notifications (sometimes disguised by
requiring that the breach involve multiple different kinds of personal
information that don't occur together in the real world).

The same organizations were in the process of doing a Cal. "opt-in"
privacy bill (institutions can only share personal information when
authorized by individual). GLBA is better known for repeal of
Glass-Steagall. However the rhetoric on the floor of congress was that
the primary purpose of GLBA was to allow those with bank charters to
keep them, but prevent anybody else from getting bank charters
(eliminate competition). However, another provision in GLBA was
"opt-out" privacy sharing (institutions can share personal information
unless they have record of individual objecting; federal preemption of
state laws). At 2004 annual privacy conference in DC during panel with
FTC commissioners, an individual asked from the floor if the FTC was
going to do anything about "opt-out". They said they were involved with
most of the major financial call-centers and none of the "opt-out" call
lines were equipped to record any information from "opt-out" calls (so
the institutions could claim they could share since there was no record
of objections).

The major motivation for cyberattacks and breaches has been being able
to use stolen account info for fraudulent financial transactions. A
problem is the business process is severely misaligned.

The value of the information to the merchant is the profit on the
transaction (possibly a couple dollars; for the transaction processor possibly
a few cents). The value of the information to the crook is the account
balance and/or credit limit. As a result, the attackers may be able to
outspend the defenders by a factor of 100 (compared to what the defenders
can afford to spend on security measures).

The account information is also required in dozens of business processes
at millions of locations on the planet. At the same time the threat of
fraudulent transactions requires that the account information is kept
confidential and never divulged. We've claimed that with the
diametrically opposing requirements, even if the planet was buried under
miles of information hiding encryption, it still wouldn't be able to
stop information leakage.

In the past, the merchants have been told that a large part of the
interchange fee (value subtracted from the amount received by merchants)
has been tightly tied to the respective fraud rates ... resulting in
studies that the financial infrastructure makes a large profit from
fraudulent transactions ... eliminating any motivation to change the
paradigm and correctly align the business process (to eliminate the
fraud). Furthermore, crooks would likely move attacks to the next
lowest-hanging part of the financial infrastructure (which doesn't
involve merchants; no justification to charge a hefty profit fee
whenever there are fraudulent losses).

--
virtualization experience starting Jan1968, online at home since Mar1970

The dbdebunk revival

paul c <toledobythesea@oohay.ac> writes:
Relational theory is not only widely misunderstood but its application
remains incomplete so it makes sense to me that FP and the other
lights should continue to finish their work. Codd didn't give up just
because the powerful IMS factions tried to sabotage him. It seems most
of FP's critics had similar vested interests.

The IMS rivalry was more friendly than that ... I worked with Jim in
system/r days and when he left for tandem ... one of the things he tried
to palm off on me was consulting with the IMS group.

It was EAGLE (the IMS followon) that was going to be the grand & glorious
... folklore is that with the whole corporation focused on EAGLE
... it was possible to do the system/r technology transfer from
bldg. 28 to endicott ... to get out SQL/DS.

at the time Congress was working on passing Sarbanes-Oxley, the claim
was that if a CEO signed a financial statement that turned out to be
wrong (including all those that later needed restatement), he would
go to jail. jokes at the time were that it would just provide extra
business for the audit companies and nothing would change. GAO has
done reports of large numbers of fraudulent financial statements and
nothing has happened to the CEOs (I guess nobody in congress really
expected that the SEC would do anything)

A TV business news commentator just now said that the associations would
raise the fees charged merchants by an amount equal to (or more than) what
they would be paying in the settlement (potentially even profiting
from the settlement, law of unintended consequences?)

The dbdebunk revival

paul c <toledobythesea@oohay.ac> writes:
Among the techies no doubt, but the HW salesmen who made huge
commissions as well as the IMS execs were brutal, not only directly
but behind Codd's back. I've heard the stories from a couple of
people who knew him quite well.

IMS started out on the OS/360 batch MVT platform ... which evolved
into the (batch) MVS platform

system/r had been done on the virtual machine vm/370 platform ... and
the MVS organization was involved in all sorts of internal politics
... including repeatedly trying to have the vm/370 product killed off.
misc. past posts mentioning system/r
http://www.garlic.com/~lynn/submain.html#systemr

I've commented that part of the issue was that in the wake of the
"Future System" failure ... the culture of the corporation significantly
changed ... with lots of people operating their careers by managing
information up the chain. Ferguson & Morris, "Computer Wars:
The Post-IBM World", Times Books, 1993:
... and perhaps most damaging, the old culture under Watson Snr and Jr
of free and vigorous debate was replaced with sycophancy and make no
waves under Opel and Akers. It's claimed that thereafter, IBM lived in
the shadow of defeat

... snip ...

another quote from the book:
But because of the heavy investment of face by the top management, F/S
took years to kill, although its wrongheadedness was obvious from the
very outset. "For the first time, during F/S, outspoken criticism
became politically dangerous," recalls a former top executive.

one of the things they (also) let me do was play disk engineer in the
disk development engineering lab. when I first wandered in, they were
scheduling development disk testing dedicated, "stand-alone", 7x24
around the clock. At one point they had tried installing MVS in order to
do multiple, concurrent testing ... but found MVS had 15min MTBF in that
environment. I offered to rewrite the i/o supervisor to make it bullet
proof and never fail (so they could do on-demand, anytime, concurrent
testing, greatly increasing disk development productivity). I wrote an
internal-only paper on what was done and happened to mention MVS with
15min MTBF ... which brought the wrath of the MVS organization down on
my head. misc. past posts mentioning getting to play disk engineer
http://www.garlic.com/~lynn/subtopic.html#disk

--
virtualization experience starting Jan1968, online at home since Mar1970

timothy.sipples@US.IBM.COM (Timothy Sipples) writes:
Yahoo! Mail -- the Web version -- *still* does not use HTTPS for most
communications AFAIK. For example, if you're using a free wi-fi hotspot at
a coffee shop, and you access Yahoo! Mail via their Web interface,
practically everything except your login credentials flows in the clear. A
fairly unsophisticated attacker can intercept that traffic and spoof your
browser -- and access all your e-mail -- for up to 7 calendar days (the
default timeout).

Security professionals have been warning Yahoo! and criticizing them for a
decade. Google Mail and Microsoft Hotmail, among others, don't have the
problem. (Google has always encrypted its Web UI for e-mail.) Yes,
implementing HTTPS costs money. So do security breaches!

In short, don't access Yahoo! Mail over any network that you don't trust --
or, better yet, don't access Yahoo! Mail over the Web at all. Access via
IMAP -- iPhone or iPad, as examples -- using the built-in mail client is
encrypted. Access via the free Zimbra Desktop software is also encrypted,
to pick another example. Or don't use Yahoo! Mail at all.

possibly within hrs of the last email in the above (end jan1992,
discussion of a meeting at LLNL), the scaleup was transferred, we were
told we couldn't work on anything with more than four processors, and
a couple weeks later there was an IBM supercomputer announcement (for
scientific and numeric intensive *ONLY*) ... and we decided to leave.

two of the other people mentioned in that meeting later leave and show up
at a small client/server startup (responsible for something called the
"commerce server"). we get brought in as consultants because they want
to do payment transactions on their server; the startup had also
invented this technology called "SSL" that they want to use.

As part of the effort we need to map the technology into the business
process of payment transactions, establish the requirements for SSL use
to meet security assumptions, as well as do audits and walkthroughs of
these new business processes selling merchant domain name SSL digital
certificates (the result is now frequently referred to as "electronic
commerce") ... some past posts
http://www.garlic.com/~lynn/subpubkey.html#sslcert

SSL was to meet two objectives 1) is the webserver that you think you
are talking to actually the webserver you are talking to and 2) hide
(aka encrypt) payment/sensitive information being transmitted through
the internet.

For "SSL" to meet #1, the requirements are that the end-user know the
relationship between the webserver they think they are talking to and
the URL they type into the browser; then "SSL" establishes the
relationship between the URL entered and the webserver actually talked
to. Almost immediately the requirements for #1 are violated: commerce
servers find that "SSL" cuts their throughput by 85-95% ... and they
drop back to using "SSL" for just entering payment information (aka #2,
but there no longer is assurance that the webserver that you think you
are talking to is the webserver you are talking to).

You enter a non-SSL URL ... so there is little assurance that you
actually are talking to that webserver. Then sometime later, you click
on the payment/checkout button ... which provides the SSL URL (not you) ...
so it violates the first part of the #1 requirement: there is no longer any
understanding between the webserver you think you are talking to and the
URL you enter (the SSL assurance devolves to: the webserver you are
talking to is whatever webserver it claims to be).

--
virtualization experience starting Jan1968, online at home since Mar1970

John_Mattson@EA.EPSON.COM (John Mattson) writes:
Back to basics: My pet peeve(s) (serious security concerns) are:
1) sites which do not allow use of the full set of special characters. My
banks, Google and Facebook do, so it is not that hard. The more
posibilities for each character, the more secure the password.
2) sites which limit length of userid and/or password. That's just plain
dumb.

somebody in POK sent me a copy of the Corporate Directive on Passwords late
on a Friday and I redistributed it. Over the weekend, somebody printed it on a 6670
(ibm copier3 with computer interface) on corporate letterhead paper and
placed it on all the building corporate bulletin boards. Monday morning
numerous people were caught ... even tho the date was clearly a Sunday and
no "real" corporate directives are dated Sunday. Corporate password
rules from long ago and far away
http://www.garlic.com/~lynn/2001d.html#52 OT Re: A beautiful morning in AFM.
http://www.garlic.com/~lynn/2001d.html#53 April Fools Day

static shared-secrets were somewhat acceptable for
authentication 40yrs ago when a person only had a few. corporate rules
were put in place to create impossible-to-guess (therefore
impossible-to-remember) shared-secrets for authentication (with
frequent changes) ... as if it was the only authentication the person
has to deal with.

Transition to Retirement

The culture started eroding with the failure of the Future System effort
.... Ferguson & Morris, "Computer Wars: The Post-IBM World", Times
Books, 1993:
... and perhaps most damaging, the old culture under Watson Snr and Jr
of free and vigorous debate was replaced with sycophancy and make no
waves under Opel and Akers. It's claimed that thereafter, IBM lived in
the shadow of defeat

... snip ...

another quote from the book:
But because of the heavy investment of face by the top management, F/S
took years to kill, although its wrongheadedness was obvious from the
very outset. "For the first time, during F/S, outspoken criticism
became politically dangerous," recalls a former top executive.

... snip ...

I was blamed for online computer conferencing on the internal network
during the late 70s and early 80s (folklore is that when the executive
committee was told about online computer conferencing and internal
network, 5of6 wanted to fire me). One of the topics of "Tandem Memos"
was the disastrous effect on all US corporations of the rise of
MBAs and the pursuit of quarterly results. From IBM Jargon:
Tandem Memos - n. Something constructive but hard to control; a fresh
of breath air (sic). That's another Tandem Memos. A phrase to worry
middle management. It refers to the computer-based conference (widely
distributed in 1981) in which many technical personnel expressed
dissatisfaction with the tools available to them at that time, and
also constructively criticised the way products were
developed. The memos are required reading for anyone with a serious
interest in quality products. If you have not seen the memos, try
reading the November 1981 Datamation summary.

... snip ...

--
virtualization experience starting Jan1968, online at home since Mar1970

I had worked with Jim Gray at research ... and he palmed off some
number of things on me when he left for Tandem. He was behind the
formation of TPC (benchmarking). At the Berkeley event celebrating Jim
Gray, ... he was credited with formalizing the semantics for financial
data transactions (including ACID properties), ... significantly
improving their integrity and increasing the trust that auditors can
have in dataprocessing financial records.

We were tangentially involved in the cal. data breach notification act
(the "original" notification act) having been brought in to help
wordsmith the cal. electronic signature act. misc. past posts mentioning
electronic signature act
http://www.garlic.com/~lynn/subpubkey.html#signature

several of the participants were involved in privacy issues and had
done extensive surveys. the #1 issue from the surveys was identity
theft, primarily the form involving account fraud (fraudulent
financial transactions), primarily as a result of data breaches. There
seemed to be little or nothing being done about the problem and there
was some hope that the publicity from the notifications would motivate
countermeasures. The issue was that security measures are usually taken
for self-protection; the problem was that the institutions with the data
breaches had little at risk ... it was their clients/customers that
were suffering the fraud ... and so the institutions had no motivation
to take corrective action. Since then the proposed federal legislation
has been about evenly divided between requirements similar to the
original cal. bill and those that eliminate most requirements for
notifications (sometimes disguised by requiring that the breach involve
multiple different kinds of personal information that don't occur
together in the real world).

The same organizations were in the process of doing a Cal. "opt-in"
privacy bill (institutions can only share personal information when
authorized by individual). GLBA is better known for repeal of
Glass-Steagall. However the rhetoric on the floor of congress was
that the primary purpose of GLBA was to allow those with bank charters
to keep them, but prevent anybody else from getting bank charters
(eliminate competition). However, another provision in GLBA was
"opt-out" privacy sharing (institutions can share personal information
unless they have record of individual objecting; federal preemption of
state laws). At 2004 annual privacy conference in DC during panel with
FTC commissioners, an individual asked from the floor if the FTC was
going to do anything about "opt-out". They said they were involved
with most of the major financial call-centers and none of the
"opt-out" call lines were equipped to record any information from
"opt-out" calls (so the institutions could claim they could share
since there was no record of objections).

The major motivation for cyberattacks and breaches has been being able
to use stolen account info for fraudulent financial transactions. A
problem is the business process is severely misaligned.

The value of the information to the merchant is the profit on the
transaction (possibly a couple dollars; for the transaction processor
possibly a few cents). The value of the information to the crook is
the account balance and/or credit limit. As a result, the attackers may
be able to outspend the defenders by a factor of 100 (compared to what
the defenders can afford to spend on security measures).

The account information is also required in dozens of business
processes at millions of locations on the planet. At the same time the
threat of fraudulent transactions requires that the account
information is kept confidential and never divulged. We've claimed
that with the diametrically opposing requirements, even if the planet
was buried under miles of information hiding encryption, it still
wouldn't be able to stop information leakage.

Part of the mis-aligned business process issue is that merchants have
been indoctrinated for decades that a big part of the interchange fee for
electronic transactions is related to the associated fraud ... with major
fraud coming as a result of account information harvested from
previous transactions as a result of data breaches. In fact, it
has been observed that there is significant profit for financial
institutions in the fraud surtax ... which possibly is a barrier to
introducing a paradigm change that would eliminate the fraud. The other
inhibitor is that any paradigm change that would eliminate the majority of
fraud in consumer electronic transactions (as well as eliminating the
major motivation for crooks to perform data breaches) ... would
have the crooks switching to the next lowest-hanging fruit at the
financial institutions (which wouldn't have the opportunity to charge
merchants for the costs+profit, the losses needing instead to be carried
solely by the financial institutions).

zedgarhoover@GMAIL.COM (zMan) writes:
I've heard of folks who've fallen for this. What I can't imagine is the
confluence of someone who I know well enough to blindly send money to AND
think I'd be high enough on their list of folks to email AND wouldn't know
that they were overseas already AND don't have someone I'd call immediately
to ask "Have you heard from Joe?". Who has people in that category?!

Mind you, if someone got hacked through browser spoofing in an Internet
cafe *while overseas*, it would be a lot more plausible. The fact that this
isn't the normal MO suggests that the much-vaunted browser spoofing isn't
nearly as easy as folks make it sound...

95-96 time-period ... there were industry presentations by dial-up
consumer online banking regarding the motivation for moving to the internet
(top of the list was the enormous consumer support costs for serial-port
dial-up modems ... being able to offload them to ISPs). At the same time, the
dial-up commercial/cash-management online banking operations were saying
that they would *NEVER* move to the internet because of a long list of
security vulnerabilities (nearly all of which have since been seen).

the commercial operations eventually started moving to the internet
(anyway ... possibly loss of institutional knowledge in the industry)
and started seeing all the vulnerabilities that had been predicted. some
of this has shown up recently in court cases where business operations
have lost hundreds of thousands or millions from their accounts in such
attacks ... and they are suing the financial institutions for the loss
on the grounds of providing inadequate security.

Altair Star Trek in assembly?

"Mr Emmanuel Roche, France" <roche182@laposte.net> writes:
Obviously, you don't remember it, or did not have a look. My BASIC
program converts the file format, not what is inside the program. Me,
I always start by writing BASIC programs. It is only if I use a lot a
program that I rewrite it in assembly language. As I explained several
times, I was an IBM Mainframe COBOL programmer. When I discovered the
BASIC interpreter under CP/M, it was "love at first sight".

a copy of adventure was on stanford's pdp10 ... and apparently somebody
from tymshare copied it to their pdp10 machine and it was then ported
to their vm370/cms (the followon to cp67/cms). i was able to get a copy in
the 70s and then made it available internally within IBM.

folklore is that when CEO of tymshare was told that business customers
were playing games on their vm370/cms systems ... he initially ordered
them all removed ... but then changed his mind when he was told that
1/3rd of tymshare's revenue was coming from business customers playing
games.

I've frequently commented that in the 95/96 timeframe, online dialup consumer
banking operations were making presentations at industry conferences
on the major motivation for moving to the internet (extensive consumer
support costs for serial-port dial-up modems offloaded to ISPs). At
the same time, the online dialup commercial/cash-management banking
operations said that they would *NEVER* move to the internet because
of a long list of security vulnerabilities (which have since come to
pass). More recently the FEDs have recommended businesses
have a dedicated PC that is *NEVER* used for anything other than
online banking (as a partial approx. of the earlier online dialup
cash-management)

During debates in the Sarbanes-Oxley legislation process, the rhetoric
was that it would make sure that all fraud was caught and that if CEOs
signed audit reports and financial filings that turned out to be wrong
... the CEOs would go to jail (and nothing like Enron or Worldcom
would ever happen again) ... but it required the regulatory agencies to
do something. Possibly even GAO didn't think that the regulatory
agencies were doing anything and started doing reports of public
company fraudulent financial filings, even showing an uptick after
SOX. Somewhat gives rise to regulatory agencies as the three monkeys
(see no evil, hear no evil, speak no evil).

Frequent changes and impossible-to-guess rules make static, shared-secret
(passwords, pins, etc) authentication ... impossible to
remember. 40yrs ago with possibly one or very few ... it was barely
possible ... but the static shared-secret paradigm doesn't scale. Part of
it is that basic security principles require a unique shared-secret for
every unique security domain (as a countermeasure to cross-domain
attacks), which results in individuals requiring large scores or even
hundreds. Rules for passwords are then created as if they existed in an
isolated environment, as if the individual only has to deal with that
single password.

Old post with password rules from 30yrs ago ... it was sent to me (on
the west coast) from somebody on the east coast and I redistributed
it. Somebody then printed the rules on corporate letterhead and placed
them on every building corporate bulletin board over the weekend. Monday
morning lots of people coming to work didn't pay any attention to the
date (further giveaway, it fell on a Sunday that year):
http://www.garlic.com/~lynn/2001d.html#53

actually it is possible to replace registration of
shared-secrets/passwords with registration of public keys ... using
the exact same process. Then authentication becomes digital signature
verification using the public key that is registered in lieu of a password
(slightly more complex than a direct compare). Since it is no longer a
shared-secret paradigm, the same public key can be registered
anywhere/everywhere ... and it is no longer required that repositories
of public keys be kept from prying eyes (since the public key can only
be used for verifying a digital signature, it can't be used for generating
a digital signature for authentication). Since the digital signature
isn't static, it is immune to eavesdropping and replay attacks.
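
a minimal sketch of the idea, assuming libsodium for the crypto (the
posts don't name any particular library): registration stores only the
public key where the password would have gone; authentication verifies a
digital signature over a fresh challenge, so nothing static crosses the
wire and no certificates or PKI are involved:

  /* illustrative only: public key registered in lieu of a password,
     digital signature verified at authentication time */
  #include <sodium.h>

  int main(void)
  {
      if (sodium_init() < 0) return 1;

      /* "registration": client generates a keypair and sends only the
         public key, which goes in the account record in lieu of a password */
      unsigned char pk[crypto_sign_PUBLICKEYBYTES];
      unsigned char sk[crypto_sign_SECRETKEYBYTES];
      crypto_sign_keypair(pk, sk);

      /* "authentication": server issues a fresh challenge (so the
         signature is never static), client signs it with the private key */
      unsigned char challenge[32];
      randombytes_buf(challenge, sizeof challenge);

      unsigned char sig[crypto_sign_BYTES];
      crypto_sign_detached(sig, NULL, challenge, sizeof challenge, sk);

      /* server verifies against the registered public key */
      return crypto_sign_verify_detached(sig, challenge,
                                         sizeof challenge, pk) == 0 ? 0 : 1;
  }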

There have been both RADIUS and KERBEROS implementations of simply
registering public keys ... w/o requiring PKIs, certification
authorities, digital certificates, centralized authorities, etc. I
wrote the RFC changes for kerberos pk-init that would do digital
signature authentication w/o certificates ... basically the identical
business process used for registering password ... doesn't have the
limitations of shared-secret paradigm or the problems with PKIs and
central authorities. Then the PKI industry applied heavy pressure to
have pk-init specification include certificate-based mode of
operation.

We were also brought in to help wordsmith the cal. electronic
signature legislation. The PKI industry was heavily
lobbying/pressuring for the legislation to mandate PKI with digital
certificates. The PKI industry had been shopping a $20B/annum
business case around wallstreet (i.e. $100/person/annum digital
certificate) that would be funded by the financial industry. We easily
demonstrated that it wasn't needed.

possibly within hrs of the last email in the above, the cluster
scaleup is transferred and we are told we can't work on anything
involving more than four processors (a couple weeks later it is
announced as ibm's supercomputer) ... prompting us to decide to
leave.

Two of the other people in the Ellison meeting also leave and show up
at a small client/server startup responsible for something called the
"commerce server". We are brought in as consultants because they want
to do payment transactions on the server; the startup had also
invented this technology called "SSL" they want to use; the result is
now frequently called "electronic commerce".

As part of the effort we have to map SSL technology to payment
transactions and do walkthrus/audits of these operations manufacturing
"SSL" digital certificates. One of the things we identify is their
systemic risk ... and also that they aren't actually needed in all
but a very tiny number of scenarios ... being able to easily
demonstrate a certificate-less mode of operation.

Somewhat from having done what is now frequently called "electronic
commerce", in the mid-90s we were asked to participate in the X9A10
financial standards working group which had been given the requirement
to preserve the integrity of the financial infrastructure for all
retail payments (aka ALL: brick&mortar, point-of-sale, attended,
unattended, internet, face-to-face, debit, credit, ACH, stored-value,
etc).

We did the X9.59 financial transaction standard which slightly tweaks
the paradigm and eliminates crooks being able to use information from
transactions for fraudulent transactions ... there is no longer a need to
hide transaction details as a countermeasure to using the information
for fraudulent financial transactions. x9.59 reference
http://www.garlic.com/~lynn/x959.html#x959

it does nothing to stop data breaches, eavesdropping, skimming, etc
... it just eliminates the motivation, since the crooks can no longer
use the information for fraudulent transactions.

now the largest use of "SSL" in the world today is this earlier effort
for "electronic transactions" ... which hides financial transactions
details while be transmitted on the internet. With x9.59, it is no
longer necessary to hide the details ... so it also eliminates the
major use of "SSL".

we were also tangentially involved in the cal. state data breach
notification legislation. Many of the parties involved in the electronic
signature work were also heavily into privacy issues and had done detailed
consumer surveys. The number one issue was "identity theft"
... primarily account fraud/take-over as a result of data
breaches. There was nothing being done about these data breaches and
apparently it was hoped that the publicity from breaches would
motivate changes. A major issue is that security measures are taken in
self-protection/interest ... the institutions having data breaches had
nothing at risk ... it was the account owners that were at risk.

In the interval since the cal. data breach notification legislation,
many other states have passed similar legislation (the
cal. legislation being the first). There have also been dozens of
federal bills introduced, about evenly divided between those that
are similar to cal's legislation and those (federal preemption) that
would effectively eliminate the requirement for notification (sometimes
cleverly worded, like requiring the breached data to contain combinations
of personal information that rarely occur in real life).

--
virtualization experience starting Jan1968, online at home since Mar1970

A trivial point with RADIUS is that its administrative account support allows
multiple different authentication methods concurrently. The
registration of a public key in lieu of a password could be done on an
account-by-account basis ... w/o requiring a wholesale change-over. The
business process becomes identical for such public key operation as
for passwords ... with a graceful transition/switch-over .... w/o the
enormous GORP associated with PKI ... but addressing the shortcomings
of static, shared-secret (password) authentication.

The problem with shared-secrets/passwords not scaling is that a unique
shared-secret is required for every unique security domain. Public key
& digital signature eliminates that scaling problem since the same
public key can be registered in lieu of a password ... at every
infrastructure that currently does passwords ... and doesn't require
PKI and/or digital certificates.

PKI and digital certificates are a paradigm that structures public key
operation in such a way that lots of money can be charged .... whether
or not it is needed.

PKI & digital certificates are the equivalent of letters of
credit/introduction from sailing ship days for first-time interaction
between complete strangers. It was to address the situation where the
relying party had no prior knowledge for a first-time interaction with a
stranger and no other mechanism for obtaining the information. It was
designed for the days of dial-up email when a phone connection was made
to the electronic post office, email exchanged, the line hung up, and then
it was necessary to authenticate first-time email from a stranger in a
totally offline environment.

The failure to get the financial industry to underwrite $100/account/annum
digital certificates & PKI was that the financial industry already has a
prior relationship with the parties they are dealing with ... and/or
has other, much better sources of information ... frequently
real-time (invalidating the assumptions requiring PKI and digital
certificates).

The other part of the problem ... was that while digital signature
authentication could go a long way towards addressing fraud in
financial transactions (aka the X9.59 standard) ... PKI and digital
certificates are not only redundant and superfluous ... but also
represent tremendous (unnecessary) payload bloat. A typical digital
certificate payload is 100 times larger than a typical payment
transaction payload .... forcing digital certificates to be appended
to every payment transaction increases the transaction payload by a
factor of 100 (for something that is redundant and
superfluous). misc. past posts mentioning digital certificates
representing an (unnecessary, redundant and superfluous) factor of 100
payload bloat for payment transactions
http://www.garlic.com/~lynn/subpubkey.html#bloat
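
illustrative arithmetic only -- the sizes below are assumptions for the
sake of the example, not figures from the posts: a bare payment message
on the order of tens of bytes against a certificate of several KB gives
roughly the factor-of-100 bloat described above:

  /* illustrative only: assumed payload sizes, just to show the ratio */
  #include <stdio.h>

  int main(void)
  {
      double transaction_bytes = 70.0;    /* assumed bare payment message */
      double certificate_bytes = 7000.0;  /* assumed appended certificate */
      printf("bloat factor: ~%.0fx\n", certificate_bytes / transaction_bytes);
      return 0;
  }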

The market for PKI and digital certificates is quickly becoming
no-value transactions where the relying party has no other mechanism
for obtaining the information ... and/or the value of the transaction
doesn't justify the cost of higher quality source of information. With
growing ubiquitous, nearly free connectivity, PKI is being forced
further and further into the no-value market segment. However, as PKI
is relegated further and further into the no-value market segment,
there is less and less justification for paying for digital
certificates.

--
virtualization experience starting Jan1968, online at home since Mar1970

loc5019-20:
A rapid coup d'oeil, prompt decision, active movements, are as
indispensable as sound judgment; for the general must see, and decide,
and act, all in the same instant.

followed by a long discussion of how lots of great conquerors started in
their teens; Napoleon started as an officer in his teens as did many
of his generals (and they were still quite young) ... most of the
opposition was headed by generals in their 60s-80s .... it does mention
that Wellington was the same age as Napoleon and studied at the same
military schools in France.

but it also has a reference to the long historical tradition of our MICC,
loc4344-55:
The qualifications of the former were probably limited to their
recollection of some casual visit to two or three of the old European
fortresses; and the latter probably derived all their military science
from some old military book, which, having become useless in Europe,
had found its way into this country, and which they had read without
understanding, and probably without even looking at its date. The
result was what might have been anticipated--a total waste of the
public money. We might illustrate this by numerous examples. A single
one, however, must suffice. About the period of the last war, eight
new forts were constructed for the defence of New York harbor, at an
expense of some two millions of dollars. Six of these were circular,
and the other two were star forts--systems which had been discarded in
Europe for nearly two thousand years! Three of these works are now
entirely abandoned, two others are useless, and large sums of money
have recently been expended on the other three in an attempt to remedy
their faults, and render them susceptible of a good defence. Moreover,
a number of the works which were constructed by our engineers before
that corps was made to feel the influence of the scientific education
introduced through the medium of the Military Academy--we say, a
considerable number of our fortifications, constructed by engineers
who owed their appointment to political influence, are not only wrong
in their plans, but have been made of such wretched materials and
workmanship that they are already crumbling into ruins.

Certain to Win (Chet Richards) ... the kindle version no longer seems to
be at amazon. loc980-85:
Fingerspitzengefuhl: Intuitive Skill Literally a fingertip feeling or
sensation, it is usually translated as "intuitive skill or knowledge."
It provides its owner an uncanny insight into confusing and chaotic
situations and is often described as the "ability to feel the battle."
During the North African campaign, the British ascribed this seemingly
mystical quality to Rommel because he always seemed to know what the
British were going to do.

... snip ...

this includes intuitive insight/knowing w/o even laying eyes on it
... which has a large amount of overlap with coup d'oeil. As I mentioned
upthread, coup d'oeil is a visual metaphor along the lines of OODA
... and the cited book from 1846 has coup d'oeil with a significant part
of OODA: see (aka observe), decide and act.

My amazon order history shows "Certain to win" paperback from 2005 and
kindle version from 2010 ... at the moment all the order histories
show URL for corresponding items ... except there is no longer URL for
"certain to win" kindle version. no explanation.

We did some work with somebody who was extremely audio-oriented ... in
meetings, he would stand in a corner facing the wall to block out
visual inputs. He also happened to be a candidate for a MacArthur award
... but was told he was disqualified when they found he had worked on a
DARPA-funded project (the claim was strong bias against certain kinds
of DARPA activities).

--
virtualization experience starting Jan1968, online at home since Mar1970

the scaling problem with passwords is that every environment assumes
that it is the only environment requiring a person to memorize an
impossible-to-guess something you know, shared-secret. human factors
shows that it doesn't work past a very few ... current reality is that
people are potentially faced with trying to deal with large scores or
even more than a hundred.

there is some theory that pci/dss didn't appear until after the
cal. data breach notification act, as part of an industry effort to
show that the industry was taking corrective action and therefore the
data breach notification act was no longer required (i.e. the
motivation for the data breach notification act was in large part that
it appeared as if nothing was being done). However there have been
quite a large number of data breaches even with PCI/DSS.

Part of the issue is that the business process in the current paradigm is quite mis-aligned

• The value of the information to the merchant is the profit on the
transaction (possibly a couple dollars; for the transaction processor
possibly a few cents). The value of the information to the crook is
the account balance and/or credit limit. As a result the attackers may
be able to outspend the defenders by a factor of 100 (relative to what
the defenders can afford to spend on security measures).

• The account information is also required in dozens of business
processes at millions of locations on the planet. At the same time the
threat of fraudulent transactions requires that the account
information is kept confidential and never divulged. We've claimed
that with the diametrically opposing requirements, even if the planet
was buried under miles of information hiding encryption, it still
wouldn't be able to stop information leakage.

Ibmekon writes:
It confirms my suspicion that governments will undermine any attempt
of citizens to save money for their old age. This is done by calling
it a pension, insurance policy etc - and applying legal sanctions to
its usage.
These prevent early encashment.
Normal saving is discouraged by "retention taxes", capital gains,
inflation etc etc etc

wallstreet is behind many of the efforts to loot all major piles of
money. in the S&L crisis it was wallstreet investment bankers that
wanted the S&L reserves significantly reduced ... they then swooped
into the S&Ls with junk bonds as a place to stow the money ... with
most of it just evaporating into wallstreet crevices (most of the
people behind the activity were never touched).

401Ks & IRAs were a similar activity ... as well as pushes to privatize
SS. one of the issues in looting the large pension funds was that they
were restricted to "safe" investments. they then found they could make
toxic CDOs appear "safe" by paying the rating agencies for triple-A
... opening up access to all the piles of money restricted to only
dealing in triple-A.

there is a whole bunch about propping up the too-big-to-fail and not
holding them accountable ... but very little about how much of the
triple-A rated toxic CDOs are still being held by large pension funds
(a primary point of dealing in triple-A was that it opened up the large
pension funds for the toxic CDOs).

the original TARP funds were to "buy" up toxic assets ... but with $27T
done during the period
Evil Wall Street Exports Boomed With 'Fools' Born to Buy Debt
http://www.bloomberg.com/apps/news?pid=newsarchive&refer=home&sid=a0jln3.CSS6c

... the $700B appropriated for TARP would have hardly made a
ripple. End of 2008, the estimate was that just the four largest
too-big-to-fail were still carrying $5.2T "off-balance" ... earlier
that fall, several tens of billions had gone for 22 cents on the
dollar; if the TBTF had been required to bring it back on balance, they
would have been declared insolvent and forced to liquidate
Bank's Hidden Junk Menaces $1 Trillion Purge
http://www.bloomberg.com/apps/news?pid=newsarchive&sid=akv_p6LBNIdw&refer=home

TARP was directed at institutions with bank charters ... including
giving bank charters to some of the wallstreet institutions that didn't
have them. But there is little mention of the large pension funds
... which were primary targets of the whole triple-A rating scam.

one of the issues with all the propping up of the too-big-to-fail, and
so little addressing of the underlying problem, is that it remains a
huge boat anchor dragging down any recovery.

Bernanke's testimony yesterday (continuing today) was that he would try
and do something ... but from last summer:

Geithner, Bernanke have little in arsenal to fight new crisis
http://www.washingtonpost.com/business/economy/geithner-bernanke-have-little-in-arsenal-to-fight-new-crisis/2011/08/12/gIQAFuFvFJ_story.html

--
virtualization experience starting Jan1968, online at home since Mar1970

some of this has been playing out in mainframe discussions about the
role that z196 can play in cloud computing. one of the big issues is
that the large public cloud pioneers heavily leverage open hardware and
software ... in part being able to minimize costs ... but also
assembling their own blades (claiming 1/3rd the price of brand name
blades) and building their own operating systems ... allowing them to
be tailored specifically for the task at hand.

Multi-factor authentication is assumed to be more secure if the
different factors have different vulnerabilities and exploits (a PIN is
frequently a countermeasure to a lost/stolen card). misc. past posts
mentioning the 3-factor authentication paradigm
http://www.garlic.com/~lynn/subintegrity.html#3factor

However, end-point compromise (and/or end-point communication
compromise) has represented a common vulnerability for at least a
couple decades. A compromised end-point harvests both the card static
data and the PIN ... allowing creation of a counterfeit card with
PIN. The compromised end-point represents a common vulnerability
... negating the increased-security assumption about independent
vulnerabilities/exploits (such end-point compromises date back at
least two decades).

A compromised end-point harvested the static authentication data
necessary to create a counterfeit YES CARD. The nature of the
operation turns out to negate even needing to harvest the associated
PIN. After a terminal authenticates the (counterfeit YES CARD) card,
the card always answers "YES" to the following questions: 1) was the
correct PIN entered, 2) do an offline transaction, 3) is the
transaction within the account credit limit.

The "YES" answer to #1 negates even needing to know the correct
pin. The "YES" answers to #2&#3 negates invalidating the account as
method of stopping the fraud. YES CARDS presentation at the ATM
Integrity Task Force meetings prompted somebody in the audience to
loudly comment that they managed to spend billions of dollars to prove
chips are less secure than magstripe (because invalidating account is
not effective against a YES CARD). misc. past posts mentioning
YES CARD:
http://www.garlic.com/~lynn/subintegrity.html#yescard
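
A minimal, hypothetical sketch (invented names, not the actual EMV
flow) of the offline decision sequence just described, illustrating why
a card that answers "YES" to all three questions defeats both the PIN
check and account invalidation:

#include <stdbool.h>
#include <stdio.h>

struct card_ops {
    bool (*pin_ok)(const char *pin);    /* 1) was the correct PIN entered?     */
    bool (*do_offline)(void);           /* 2) do an offline transaction?       */
    bool (*within_limit)(long amount);  /* 3) within the account credit limit? */
};

/* counterfeit card: harvested static authentication data is replayed,
   and the chip simply answers "YES" to every question */
static bool yes_pin(const char *pin)   { (void)pin;    return true; }
static bool yes_offline(void)          {               return true; }
static bool yes_limit(long amount)     { (void)amount; return true; }

static bool terminal_approve(const struct card_ops *card, const char *pin,
                             long amount)
{
    if (!card->pin_ok(pin))   return false;  /* "YES" negates knowing the PIN      */
    if (!card->do_offline())  return false;  /* "YES" means it never goes online   */
    return card->within_limit(amount);       /* "YES": a cancelled account is never checked */
}

int main(void)
{
    struct card_ops counterfeit = { yes_pin, yes_offline, yes_limit };
    printf("approved: %d\n", terminal_approve(&counterfeit, "0000", 500));
    return 0;
}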

--
virtualization experience starting Jan1968, online at home since Mar1970

Shared-secret/password (something you know) authentication appears to
be fine when taking a myopic institutional-centric viewpoint ... the
magnitude of the problem and the poor scaling become apparent with a
person-centric viewpoint, when an individual is required to deal with
hundreds of institutions all requiring their own unique,
shared-secret/password authentication.

An issue with getting institutions to correct the problem ... is that
it isn't an issue with any specific institution ... it spans all the
institutions depending on password-based authentication. It is
analogous to the upthread mention of data breach notification
legislation ... the institutions weren't at risk from the breaches and
therefore weren't doing anything ... it was the individuals that were
at risk ... and there was some hope that the publicity from the data
breach notifications would prompt institutional action.

In the wake of the original data breach notification legislation
... there was some corrective action with PCI-DSS ... but it 1) seemed
to largely be motivated as an excuse to eliminate the notification
requirement and 2) seemed to have little practical effect on breaches
(breaches occurring at PCI-DSS certified institutions ... which would
then have the PCI-DSS certification revoked ... not because they
didn't meet PCI-DSS certification ... but because they had a breach).

Middle 90s, there were several presentations at financial industry
conferences about the motivation for dialup, online banking operations
moving to the internet (a major motivation was huge consumer support
costs related to serial-port modems being offloaded to ISPs). At the
same time, the dialup, online commercial banking/cash-management
operations were claiming that they would never move to the internet
... for a long list of security issues ... including numerous kinds of
end-point compromises (all of which have since come to pass).

End of the 90s, lots of operations were working on something you have,
card-based authentication for the personal computing market, the
company in redmond having card-based groups, the EU having a
countermeasure standard for the compromised PC/end-point, etc. Early
part of the century, a major payment card operation added a chip to
their consumer payment card and offered a "free" cardreader to their
customers. The ensuing customer support disaster resulted in a rapidly
spreading rumor throughout the industry that smartcard-based
authentication wasn't practical in the consumer market ... and nearly
all smartcard-based solutions were abandoned (including the redmond
company dissolving all their smartcard groups).

My wife and I did a joint after-action review with some members at the
redmond company ... and it turned out the enormous customer support
disaster was not related to smartcards but to the serial-port smartcard
reader that was being used for the free give-away (some conjecture that
they got a fire-sale on serial-port readers exactly because they were
obsolete). The industry institutional knowledge about enormous
serial-port customer support costs apparently evaporated in the few
years between the mid-90s and the late-90s (serial-port related
customer support problems were also a major motivation for development
of USB). In any case, it wasn't possible to turn around the rapidly
spreading and pervasive opinion in the industry (it really being a
serial-port issue and not a smartcard issue) or to turn around all the
programs being abandoned, including the EU countermeasure standard for
the compromised PC/end-point.

part of the hang-over is from the 90s when large institutions spent
billions on re-engineering from legacy mainframe to large numbers of
killer-micros.

in the 60s & 70s ... lots of legacy institutions added online/real-time
front ends to their batch settlement operations ... however they still
completed the operations in overnight processing.

in the 90s, the increasing workload and globalization .... were putting
pressure on shortening the overnight batch window as well as increasing
the processing that needed to be done. To address the problem, the
billions were spent on re-engineering for straight-through processing
... with the increase in workload (i.e. the overnight batch processing
portion being combined with the real-time operations) offset by
implementation on large numbers of killer micros.

the problem was that the pilot workloads/demonstrations were done with
technology that didn't actually do parallel scaling well ... along with
having on the order of 100 times the overhead of comparable legacy
batch Cobol (totally swamping the anticipated throughput improvements
from large numbers of killer micros). The resulting monumental failures
resulted in huge risk aversion in the culture and settling back to the
legacy mainframe implementations.

A couple years ago, I participated in taking some newer technology to a
financial industry association that addressed all the problems from the
90s ... reducing straight-through processing overhead to only 3-5 times
that of overnight batch cobol, instead of 100 times. Simulated
transactions on a modest-sized cluster showed the ability to handle the
equivalent of a full day of (straight-through processing) transactions
for a very large institution in approx. an hour of elapsed time (easily
handling daily peak-load with plenty of capacity to spare).

Initially the (mostly technical) members of the financial association
showed a great deal of interest ... but then all activity was
halted. One of the offline comments was that among the business people
at the member institutions ... the scars from the failures of the 90s
were still too fresh ... and it was going to take quite a bit more time
before shifting away from the risk-averse culture that had resulted
(from those failures).

note however, lots of other stuff did (successfully) migrate off legacy
mainframes, contributing to the company going into the red in the early
90s ... resulting in the Gerstner period that resurrected the company
... directing it away from hardware products and more into
services. Recent numbers are that 83% of revenue comes from
services&software ... with 17% accounting for everything else
... including all hardware. Hardware revenue has been pegged at approx.
equally divided: $5B for i86 hardware, $5B for power/risc hardware, and
$5B for legacy mainframe hardware.

the above says Z mainframe revenue is down 24% ... which would
correspond to dropping below $4B ... which would translate into the
equivalent of approx. 135 z196s (@$28M each).

from the article:
IBM has tried to position mainframes as cloud servers. There's just one
problem with that approach: It's so much spin that I'm surprised company
executives don't lean when they stand.

... snip ...

--
virtualization experience starting Jan1968, online at home since Mar1970

mward@SSFCU.ORG (Ward, Mike S) writes:
This is one area where I really have a problem. It used to be back in
the 370 days that if a machine was rated at 50 mips and you moved up
to 100 mips you really noticed the difference in execution time. Today
if you have a 100 mip machine (I know they're rated at msu's not mips)
and you moved up to a dual with 160 mips you might be cutting your own
throat. They may give you 2 processors each rated at 80 mips for a
total of 160 mips. If your workload is such that it can't take
advantage of dual processors then you have just dropped down to an 80
mip machine when you used to have a 100 mip machine. I know I'm on a
rant, but it happened to us and we were being pressured by the vendor
to go to the dual processor and that we would be very happy. We
weren't. (end of rant)

370s & for a few generations ... going from uniprocessor to
dual-processor started off by slowing machine cycle of each processor
down by 10% ... bascially allowing caches a little headroom to handle
cross-cache invalidations from the other cache (store through processor
caches, every store operation would also involve sending invalidation
signal to the other cache for that cache line). So basic two-processor
hardware ran at 1.8 times a single processor. Then operating system
multiprocessor overhead would increase (back when single processor MVS
"capture ratio" could be 50%) ... leaving even less cycles for
application execution ... aka same exact 10mip uniprocessor would only
start out only being 9mip processor in two processor mode. Note that
actual handling of cross-cache invalidation was over&above the 10%
processor cycle slowdown (in real live operation, 10mip process running
at 9mips ... would actually effectively have less than 9mips, further
reduced by multiprocessor operating system overhead & cache overhead of
handling cross-cache invalidation signals).
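
Back-of-envelope arithmetic for the above; the 10% cycle slowdown and
the 1.8x figure are from the text, while the MP-kernel and cross-cache
invalidation overheads below are illustrative assumptions, not measured
numbers.

#include <stdio.h>

int main(void)
{
    double uni_mips       = 10.0;   /* nominal uniprocessor rating                    */
    double cycle_slowdown = 0.10;   /* each CPU clocked 10% slower                    */
    double mp_kernel_ovhd = 0.10;   /* assumed extra MP operating-system overhead     */
    double xinval_ovhd    = 0.05;   /* assumed cost of handling cross-cache invalidates */

    double per_cpu  = uni_mips * (1.0 - cycle_slowdown);    /* 9 MIPS            */
    double hardware = 2.0 * per_cpu;                        /* 18 MIPS, i.e. 1.8x */
    double usable   = hardware * (1.0 - mp_kernel_ovhd - xinval_ovhd);

    printf("per-CPU %.1f, raw 2-way %.1f, usable after overheads %.1f MIPS\n",
           per_cpu, hardware, usable);
    return 0;
}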

the strategy with 3081 was to never again offer a single processor at
the high-end. this ran into a couple problems ... clone processor
vendors were offering uniprocessors and ACP/TPF didn't have
multiprocessor support. All sorts of unnatural acts were done to try
and make a 3081 acceptable to ACP/TPF (and head off the customer base
all moving to clone processors). this is besides the issues outlined
here about comparison between 3081 and clone processors:
http://www.jfsowa.com/computer/memo125.htm

eventually there was the 3083 (in large part for the ACP/TPF market)
which was created by removing a processor from a 3081 (which is not as
simple as you might think; processor 0 was at the top of the frame, so
processor 1 in the middle of the frame would be the one removed ... but
that made the frame dangerously top-heavy). Being only a single
processor, turning off the cross-cache 10% slowdown made the processor
nearly 15% faster (than a processor in a 3081).

combining two 3081s together for a four-processor 3084 was a big
challenge ... since it meant that each processor cache would be getting
cross-cache invalidation signals from three other caches (not just
one). kernel storage use became significant ... so operating systems
running on 3084 were cache-line sensitized ... all kernel storage was
changed to align on cache-line boundaries and be multiples of
cache-lines. The problem was that if the end of one storage area fell
at the start of a cache-line and the start of a different storage area
fell at the end of the same cache-line ... the two different storage
areas could be in use by different processors simultaneously. However,
it represents only a single storage block for cache management ... and
could result in cache "thrashing". The storage cache sensitivity change
was claimed to improve 3084 throughput by 5-6% (minimizing cache-line
thrashing).
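
A minimal sketch of the cache-line-alignment idea in modern terms (the
128-byte line size is an illustrative assumption, not the 3084 figure):
keeping separately-used storage areas on their own cache lines so two
processors don't thrash a single line.

#include <stdio.h>

#define CACHE_LINE 128

/* two counters updated by different processors; without alignment they
   could share one cache line and ping-pong between the two caches */
struct per_cpu_counters {
    _Alignas(CACHE_LINE) unsigned long cpu0_count;
    _Alignas(CACHE_LINE) unsigned long cpu1_count;
};

int main(void)
{
    struct per_cpu_counters c = {0, 0};
    printf("sizeof=%zu (each counter padded to its own %d-byte line)\n",
           sizeof c, CACHE_LINE);
    return 0;
}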

However, higher-end 370 processor throughput was quite sensitive to
cache hit ratios ... which would be seriously affected by a high rate
of asynchronous I/O interrupts. For my "resource manager" ... I did
some hacks (at high I/O rates) turning off enablement for I/O
interrupts for periods of time and then draining all pending I/O
interrupts. I could demonstrate higher aggregate throughput (even I/O
throughput) ... since the batching of I/O interrupts resulted in much
higher processor throughput (because of better cache hit ratio)
... offsetting any delay in taking the interrupt (note part of 370/xa
was an attempt to address the same issue with various kinds of I/O
queuing in the hardware).
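
A toy, self-contained sketch (names and numbers invented, not the
actual resource-manager code) of the interrupt-batching idea: let I/O
completions queue while running disabled, then drain them all in one
pass, so the running work keeps its cache state instead of being
disturbed by every individual interrupt.

#include <stdio.h>

static int pending_interrupts = 0;    /* simulated count of queued I/O completions */

static void device_completes_io(void) { pending_interrupts++; }

static void drain_pending(void)
{
    int handled = 0;
    while (pending_interrupts > 0) {  /* take everything queued in one pass */
        pending_interrupts--;
        handled++;
    }
    printf("drained %d interrupts in one batch\n", handled);
}

int main(void)
{
    /* "disabled" period: completions queue up instead of interrupting the work */
    for (int i = 0; i < 8; i++)
        device_completes_io();
    /* periodic drain: one cache disturbance instead of eight */
    drain_pending();
    return 0;
}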

When I first did two-processor 370 support ... I was able to deploy, in
a production environment, two processors running at more than twice the
MIP rate of a single processor ... even with the processor cycle only
running at .9 that of a single processor. Some games with cache
affinity allowed an improved cache hit ratio ... which more than offset
the 10% slowdown in processor cycle.

--
virtualization experience starting Jan1968, online at home since Mar1970

Part of the issue is also that the business process in the current
paradigm is quite mis-aligned

• The account information is required in dozens of business processes
at millions of locations on the planet. At the same time the threat of
fraudulent transactions requires that the account information is kept
confidential and never divulged. We've claimed that with the
diametrically opposing requirements, even if the planet was buried
under miles of information hiding encryption, it still wouldn't be
able to stop information leakage.

...

Separating the information needed for frequent business operations and
the information needed for authentication would help the situation
... however that needs to go hand-in-hand with countermeasures for
compromised end-points/PCs. In the late 90s, there was an EU standard
that (coupled with stronger authentication) would address the
compromised end-points/PCs issue.

There were quite a number of token-based efforts developed in the late
90s. Early part of the century, a major payment card operation added a
chip to their consumer payment card and offered a "free" cardreader to
their customers. The ensuing customer support disaster resulted in a
rapidly spreading rumor throughout the industry that smartcard-based
authentication wasn't practical in the consumer market ... and nearly
all smartcard-based solutions were abandoned (including the redmond
company dissolving all their smartcard groups).

My wife and I did a joint after-action review with some members at the
redmond company ... and it turned out the enormous customer support
disaster was not related to smartcards but to the serial-port smartcard
reader that was being used for the free give-away (some conjecture that
they got a fire-sale on serial-port readers exactly because they were
obsolete). The industry institutional knowledge about enormous
serial-port customer support costs apparently evaporated in the few
years between the mid-90s and the late-90s (serial-port related
customer support problems were also a major motivation for development
of USB). In any case, it wasn't possible to turn around the rapidly
spreading and pervasive opinion in the industry (it really being a
serial-port issue and not a smartcard issue) or to turn around all the
programs being abandoned, including the EU countermeasure standard for
the compromised PC/end-point. misc. past posts mentioning EU FINREAD
http://www.garlic.com/~lynn/subintegrity.html#finread

from above:
So here's my proposal: Set up a separate system just for the sole
purpose of online banking.

... snip ...

In the 95/96 time-frame, when dialup online consumer banking said they
were moving to the internet (largely motivated by the large customer
support costs related to supporting serial-port modems), the dialup
online commercial/cash-management operations were saying they would
never move to the internet for a long list of reasons (lots of which
we are still seeing) ... although those operations eventually moved
to the internet anyway.

One of the major identified vulnerabilities was the compromised
end-point ... to the extent that the EU, in the late 90s, developed a
countermeasure standard to combat the compromised end-point ... but for
various reasons it never got deployed and was abandoned. Since that
time, there have been periodic proposals that companies use dedicated
PCs for online banking ... that are *NEVER* used for any other purpose
(a partial return to the days of online dialup, before the internet)

from above:
America is worse off than it was 30 years ago -- in infrastructure,
education and research. The country spends much less on infrastructure
as a percentage of gross domestic product (GDP). By 2009, federal
funding for research and development was half the share of GDP that it
was in 1960. Even spending on education and training is lower as a
percentage of the federal budget than it was during the 1980s.

... snip ...

in Tandem memos from 1980 ... it was blamed on the rise of MBAs and
focus on qtr numbers.

Math, leverage and risk
http://www.atimes.com/atimes/Global_Economy/NF20Dj03.html
Benoit Mandelbrot, in his 2004 The Misbehavior of Markets, had pointed
them out with mathematical elegance we could not hope to match
(Mandelbrot had pointed out flaws in the emerging underlying theory as
early as 1962).

Mandelbrot's description of the period from the 60s through the last
decade was (economists) continuing to use the same computations even
when they are repeatedly shown to be wrong. some of Mandelbrot's
references are similar to this (by a nobel prize winner in economics)

Thinking Fast and Slow
http://www.amazon.com/Thinking-Fast-and-Slow-ebook/dp/B00555X8OA
Since then, my questions about the stock market have hardened into a
larger puzzle: a major industry appears to be built largely on an
illusion of skill. Billions of shares are traded every day, with many
people buying each stock and others selling it to them

... snip ...

somebody on bloomberg tv news late thursday was trying to analyze IBM's
financial report. he made reference to an unexplained onetime $200m
profit item. He also said that there was extensive borrowing and stock
buyback as part of propping up the stock price ... the question was how
much executive bonus is tied to stock price (being the motivation for
stock buyback).

you must be thinking of some other wheeler ... they offered me a 1st
line manager position a year out of school. I asked to read the
manager's manual over the weekend. I came back on monday and said I had
been foreman on a 30-person construction crew my 1st yr in college
... and that management experience was incompatible with what's in the
IBM manager's manual. nobody ever made the offer again.

one of my favorite stories about management ... the first time I
scheduled Boyd's briefing at IBM, I tried to do it through employee
education. At first they agreed, but after I provided more information
about the briefing they changed their mind. They said IBM spends a lot
of resources training managers in how to handle employees and that
exposing general employees to Boyd's briefing would be
counterproductive. I should restrict the audience to only senior members of
competitive analysis departments.

Boyd would say he predicted the effort on the Ho Chi Minh Trail would
fail ... even before he was sent over to the spook base. Reading
between the lines in Coram's Boyd biography implies that McNamara was a
significant improvement on the Pentagon culture of the period
(including Sprey being one of the "whiz kids") ... even if that still
left a lot of room for improvement.

I met Boyd in 1983, when I first sponsored his Patterns Of Conflict
briefing at IBM.

Boyd: The Fighter Pilot Who Changed the Art of War (Robert Coram)
pg268/loc4681-86
The Ho Chi Minh Trail was a network of trails and dirt roads that
formed the main route by which North Vietnamese forces operating in
South Vietnam were resupplied by cargo-carrying bicycles and small
trucks. Seeding the trail with sensors had been the idea of Defense
Secretary McNamara's R&D technocrats, and the project became known as
the "McNamara Line." The $2.5 billion operation was a huge windfall
for IBM. The technocrats convinced McNamara that if the trail were
wired--as one Task Force Alpha worker said, like a "pinball
machine"--the supply chain could be broken and America could win the
war. This was America's first electronic battlefield. It was one of
the most highly classified operations of the Vietnam War.

pg274/loc4802-4
Boyd also dealt with situations of great consequence. He said the
McNamara Line was an expensive failure and shut it down. He claimed
that a four-star general later told him he was sent to NKP solely
because Pentagon generals knew he was the only man in the Air Force
with the guts to close down the boondoggle.

... snip ...

There have been references that the signal processing couldn't tell the
difference between an elephant and a large troop movement ... as a
result there were a significant number of elephant bombing strikes.

Boyd: The Fighter Pilot Who Changed the Art of War (Robert Coram),
other comments about nkp ... pg266 /loc4655-61
The army had a heavily guarded compound from which the curiously named
Studies and Observation Group (SOG) launched some of the most daring
and still-secret activities of the war.

not so implicit (Coram) reference to mcnamara's effect -
pg196/loc3453-57
Even among McNamara's Whiz Kids--the highly educated and
extraordinarily bright young men brought into the Building with a
mandate to impose rational thought on both the military and the
military budget--Pierre Sprey stood out.

... and pg290/Loc5060-63:
TacAir was part of the old Systems Analysis office, the home of the
Whiz Kids. Under McNamara, TacAir had been extremely powerful because
it confronted the Air Force and Navy and made them prove why each
program was needed. It thus had great influence on which proposed Air
Force programs made it into the budget. Not surprisingly, the military
loathed Systems Analysis so much that the name was changed to Program
Analysis & Evaluation (PA&E).

... snip ....

--
virtualization experience starting Jan1968, online at home since Mar1970

Coming soon: a drone for all theaters
http://www.atimes.com/atimes/Global_Economy/NG20Dj04.html
They saw their first large-scale employment in the Vietnam War,
albeit in secret without the attendant publicity of today's "drone"
operations. Launched from United States Air Force (USAF) DC-130s
operating out of South Vietnam and later Thailand, America's "Firefly"
drones flew 3,425 missions over North Vietnam and along the
Sino-Vietnamese border between 1964-1975.

... snip ...

"Boyd: The Fighter Pilot Who Changed the Art of War" (Robert Coram),
pg337/loc5844-47:
But Eisenhower did not understand this kind of conflict and, at the
very moment of victory--egged on by jealous and conventional British
officers--he grew afraid for Patton's flanks and supply lines and
ordered Patton to stop. The Germans were amazed at the respite. One
school of thought says that Eisenhower's timidity cost another six
months of war and a million additional lives.

the referenced paper:
http://www.ndu.edu/press/technological-strategy.html
... and within the Armed Forces the often controversial yet gifted
Colonel John Boyd, who was able to articulate and effectively
propagate his revolutionary vision of energy maneuverability

Anne & Lynn Wheeler <lynn@garlic.com> writes:
A couple years ago, I participated in taking some newer technology to a
financial industry association that addressed all the problems from the
90s ... reducing straight-through processing overhead to only 3-5 times
that of overnight batch cobol, instead of 100 times. Simulated
transactions on a modest-sized cluster showed the ability to handle the
equivalent of a full day of (straight-through processing) transactions
for a very large institution in approx. an hour of elapsed time (easily
handling daily peak-load with plenty of capacity to spare).

Initially the (mostly technical) members of the financial association
showed a great deal of interest ... but then all activity was
halted. One of the offline comments was that among the business people
at the member institutions ... the scars from the failures of the 90s
were still too fresh ... and it was going to take quite a bit more time
before shifting away from the risk-averse culture that had resulted
(from those failures).

Kopp paper is a warning to U.S. leadership
http://elpdefensenews.blogspot.com/2012/07/kopp-paper-is-warning-to-us-leadership.html

reference this paper

Technological Strategy in the Age of Exponential Growth
http://www.ndu.edu/press/technological-strategy.html

which cites Amdahl on parallelization:
When a processor is not fast enough to solve a problem, the most common
solution is to employ more than one processor--a technique known as
parallel processing whereby the computing workload is split across
multiple processors. Unfortunately, not every type of computation can be
easily split up to permit faster computation. The optimism surrounding
the use of computational clouds and other highly parallel systems is
frequently unrealistic, as such systems will not realize any performance
gain if the problem to be solved does not "parallelize" readily. This has
been understood by computer scientists since Gene Amdahl published his
now famous 1967 paper.

... snip ...
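
For reference, a quick self-contained illustration (values chosen
arbitrarily) of the Amdahl's-law bound the paper is invoking:
speedup = 1 / ((1 - p) + p/n), where p is the parallelizable fraction
of the work and n is the number of processors.

#include <stdio.h>

static double amdahl_speedup(double p, int n)
{
    /* serial fraction (1-p) runs at full cost; only p is divided by n */
    return 1.0 / ((1.0 - p) + p / (double)n);
}

int main(void)
{
    /* even with 1000 processors, a 10% serial fraction caps speedup near 10x */
    printf("p=0.90, n=1000: %.1fx\n", amdahl_speedup(0.90, 1000));
    printf("p=0.50, n=1000: %.1fx\n", amdahl_speedup(0.50, 1000));
    return 0;
}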

the paper also references Boyd:
... and within the Armed Forces the often controversial yet gifted Colonel
John Boyd, who was able to articulate and effectively propagate his
revolutionary vision of energy maneuverability.

locks, semaphores and reference counting

When Charlie was working on fine-grain multiprocessor locking for cp67
at the science center, he invented compare&swap (the instruction name
was chosen because CAS are Charlie's initials).

Attempts to get the instruction included in 370 were initially
rebuffed, the "owners" of the 370 architecture saying that the POK
favorite-son operating system people claimed that test&set (from 360)
was sufficient for multiprocessor operation. Their direction was that
getting compare&swap included in 370 would require coming up with
non-multiprocessor-specific uses for the instruction.

We came back with the compare&swap uses for large, multithreaded
applications that would use for atomic updating. A large DBMS running
enabled for interrupts would normally have to resort to kernel calls
for atomic operations & serialization. With compare&swap a lot of the
operations could be performed inline (independent of running on single
process or multiprocessor). Some of the examples then were added to
the mainframe principles of operation ... and still appear in current
POP 40yrs later.
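
A minimal sketch of that style of inline atomic update, using C11
atomics as a stand-in for the 370 COMPARE AND SWAP instruction (an
illustration, not one of the POP examples): a compare-and-swap loop
updates a shared counter without any kernel call or lock, whether
running on one processor or several.

#include <stdatomic.h>
#include <stdio.h>

static _Atomic long shared_counter = 0;

static void add_to_counter(long delta)
{
    long old = atomic_load(&shared_counter);
    long new;
    do {
        new = old + delta;
        /* if another thread changed the value, 'old' is refreshed and we retry */
    } while (!atomic_compare_exchange_weak(&shared_counter, &old, new));
}

int main(void)
{
    add_to_counter(5);
    add_to_counter(7);
    printf("counter = %ld\n", atomic_load(&shared_counter));
    return 0;
}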

When RDBMS were being ported to RS/6000 ... compare&swap semantics had
migrated to lots of other hardware platforms and, when available, were
being used by lots of the RDBMS implementations. RS/6000 (RIOS) didn't
have any such operation ... and so RDBMS ported to RS/6000 UNIX AIX had
to use kernel calls instead, with the associated throughput
degradation. In part because RS/6000 didn't provide multiprocessor
operation, a compare&swap instruction simulation was eventually
provided in the kernel first-level interrupt handler (running disabled
for interrupts, providing a simulation of the atomic operation on
single-processor hardware).

At the same time that the original relational/sql implementation was
going on ... System/R ... I also got into doing parts of a semantic
network DBMS implementation ... using lots of stuff from Sowa (also at
IBM at the same time as Codd). RDBMS doesn't have referential integrity
... because of the (implied) one-directional references. The SNDBMS
directly instantiated direct pointer/links and part of the
implementation was guaranteeing all pointer/links were bidirectional.

There was some contention between the IMS group (with directly
instantiated one-way pointers) and the RDBMS group with implied
one-directional pointers using values and multi-table lookups. The IMS
group criticized the RDBMS group since the indexes supporting value
lookup typically doubled the amount of required disk space (compared to
the same data loaded into IMS). The RDBMS group responded that the
value/index lookup eliminated a whole lot of manual administrative
operations required in IMS (associated with exposed pointer values).
The SNDBMS used a different kind of value-based pointer operation
... requiring index infrastructure ... but with explicit relationships
between every item and every other item (rather than implied, as in
RDBMS with table indexes), there was an enormous forest of indexes
... both forward and backwards (among other things, guaranteeing
referential integrity)
http://www.garlic.com/~lynn/submain.html#systemr
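
A toy sketch (invented structure and names, not the actual SNDBMS) of
the bidirectional-link idea: every relationship is instantiated in both
directions as one operation, so a reference can never exist whose
target doesn't know about it, which is what makes the
referential-integrity guarantee possible.

#include <stdio.h>

#define MAX_LINKS 16

struct item {
    const char  *name;
    struct item *links[MAX_LINKS];   /* every entry here has a matching back-link */
    int          nlinks;
};

/* create the relationship in both directions as one operation */
static void link_items(struct item *a, struct item *b)
{
    if (a->nlinks < MAX_LINKS && b->nlinks < MAX_LINKS) {
        a->links[a->nlinks++] = b;
        b->links[b->nlinks++] = a;
    }
}

/* removing an item unhooks every back-link, so no dangling references remain */
static void unlink_item(struct item *it)
{
    for (int i = 0; i < it->nlinks; i++) {
        struct item *other = it->links[i];
        for (int j = 0; j < other->nlinks; j++)
            if (other->links[j] == it)
                other->links[j] = other->links[--other->nlinks];
    }
    it->nlinks = 0;
}

int main(void)
{
    struct item rfc = { "rfc1", {0}, 0 }, term = { "glossary-term", {0}, 0 };
    link_items(&rfc, &term);
    unlink_item(&term);
    printf("rfc links remaining: %d\n", rfc.nlinks);   /* 0: integrity preserved */
    return 0;
}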

I maintain the RFC index and merged taxonomy&glossary information in
such a SNDBMS and then periodically rebuild the HTML files that appear
at garlic.com ... using HREFs to try and simulate bi-directional
links. For at least a decade, the HTML pages would get a couple
thousand hits a day from the major search engines ... appearing to use
the pages as a daily regression test because of the very high ratio of
HREFs to file sizes.
http://www.garlic.com/~lynn/index.html

--
virtualization experience starting Jan1968, online at home since Mar1970

Slackware

Dan Espen <despen@verizon.net> writes:
On mainframes, we OFTEN printed dumps.
It was a lot easier to dog ear a few pages and be able to circle
suspicious areas.

Sometime in the 80s the dumps became way too large for that approach.
I remember seeing a few printed out in the 10 inch to 12 inch tall
category.

Now we know that SYSUDUMP starts each line with an address that is a
multiple of 32 so if you are looking for location in R15 which
contains A01C23D4 and you have DISP CC set you search for 201C23C in column 2.

I cut and paste snippets into a README.problemname file to build up
a composite view of the problem.

IPCS for vm370 started appearing in the 70s ... running display on 3270
w/o needing printing. IPCS was a large application written in assembler.

early in the rex days (well before it was released as the rexx product
to customers), I wanted to demonstrate that it wasn't just another
pretty scripting language. My demonstration was a complete
re-implementation of IPCS ... done working half-time in less than 3
months, with ten times the function and running ten times faster (a
little hack, since rex is interpreted).

I finished early ... so I started implementing a library that would
automatically examine dumps for various kinds of failure signatures.

after the rexx release and the company heading into the OCO-wars ... I
thought my ipcs replacement would be released to customers
... especially since it was in use by almost every internal datacenter
as well as nearly every customer PSR (program service representative).
The number of internal datacenters wasn't insignificant ... the
internal network ... some past posts
http://www.garlic.com/~lynn/subnetwork.html#internalnet

was larger than the arpanet/internet from just about the beginning
until late '85 or possibly early '86. at the time of the arpanet's
great change-over to internetworking protocol on 1Jan1983, there were
on the order of 250 hosts and 100 IMP nodes ... 1983 was when the
internal network passed 1000 host-nodes ... past reference
http://www.garlic.com/~lynn/2006k.html#8

For whatever reason, release to customers never happened ... but I did
manage to get permission to do presentations at various user group
meetings ... going into detail about how I had done the implementation
... within a few months similar implementations were starting to appear
at customer sites and other vendors.

dumprx could work off live cms & vm370/cp kernel storage ... although it
wouldn't freeze the storage.

i never did get around to doing much in the switch to xa. one of the
things that could have been done with xa access registers ... was
accessing other virtual address spaces ... aka use access registers to
temporarily reach into another virtual address space ... examine things
and then resume operation

dumprx could run either as a line-mode terminal session or as an xedit
macro ... the dumprx session effectively was an xedit "file" ... so
that both dumprx and full xedit capability were usable (including other
rex macros) ... as well as things like "restarting" a session.

I wrote a mini-decompiler in the rex code ... give it a macro dsect
library and it would format areas of storage per the member
specification from the macro library.

--
virtualization experience starting Jan1968, online at home since Mar1970

"Joe Morris" <j.c.morris@verizon.net> writes:
The vertical spacing was controlled by the clutch knob in the printer; I
never heard of any standard trains that were designed for a specific
vertical increment. The glyph height was (IIRC) enough less than 1/8 inch
so that you didn't wind up with glyphs on one line merging with ones the
lines above and below, although readability suffered.

Some shops made 1/8in the standard spacing for their SYSPRINT queues, but
most concluded that the reduction in the amount of paper consumed wasn't
worth the reduced readability of dense printouts at 1/8in.

That's not to say that some shop might have bought a customized train with a
smaller-than-standard glyph height, but I would expect that to be
prohibitively expensive.

The Los Gatos VLSI lab had a special train for printing logic diagrams
sideways (& "dense" printing) on the 1403n1 ... it needed characters so
that boxes and lines appeared continuous. The application could be used
with a standard print train ... but the lines wouldn't be
solid/continuous. The application was also sometimes used to print the
internal network diagram ... i.e. nodes were boxes and lines were the
links that connected nodes. I may still have such a copy printed at
hone, approx. 300(?) or so nodes ... I would have to find it to double
check (I may even have an archived post mentioning it) ... found an
archived post (references a 4/15/77 print ... but not the number of
nodes)
http://www.garlic.com/~lynn/2002j.html#4

early on, the mainframe principles of operation was moved to a cms
script file. actually it was the architecture "redbook" (for the red
3-ring binder it was distributed in). There were conditional controls
in the file that printed either the full architecture redbook ... lots
of engineering notes, feature justification, discussion of
alternatives, etc ... or the principles of operation subset (version
selection with a command line parameter, whether redbook or POP). when
printed on the 1403n1 ... some of this can be seen in the principles of
operation ... where the diagram boxes didn't have solid/continuous
lines.

the science center developed an application that traced instruction and
storage fetch/store addresses. "plotting" was done on the 1403 with
address vertical and time horizontal ... addresses were scaled to about
a 7ft length of 1403 output and the time scale could be 20-30ft. The
paper was assembled on some of the interior hallways of the science
center.

One of the uses was looking at how to redo apl storage management. the
science center had ported apl\360 to cp67/cms for cms\apl. standard apl
storage management allocated a new storage location on every
assignment. when all storage (in the workspace) was exhausted, it would
do garbage collection (compacting in-use storage) and start all over
again. for apl\360, with a 16kbyte workspace that was completely
swapped as a single entity ... this didn't make any difference. Moved
to a multiple-megabyte virtual-storage, demand-paged environment ... it
resulted in severe page thrashing. the plot along the hallways was a
strong sawtooth pattern ... a relatively rapid rise from low storage to
high followed by a sharp "tooth blade" edge (as garbage collected)
... repeated numerous times moving down the hall.
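
A toy simulation (illustrative numbers only) of that
allocate-on-every-assignment behavior: the high-water mark climbs until
the workspace is exhausted, garbage collection compacts everything back
down, and the sawtooth repeats, touching the whole workspace each
cycle, which is what produced the page thrashing under demand paging.

#include <stdio.h>

#define WORKSPACE 1000          /* "pages" in the workspace (illustrative) */
#define LIVE      50            /* live data surviving each collection     */

int main(void)
{
    int next_free = LIVE;
    for (int assignment = 0; assignment < 5000; assignment++) {
        if (next_free >= WORKSPACE) {
            next_free = LIVE;   /* garbage collect: compact live data back down */
            printf("GC at assignment %d (whole workspace touched)\n", assignment);
        }
        next_free++;            /* every assignment takes a brand new slot */
    }
    return 0;
}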

the application also did semi-automated program reorganization as an
aid in improving performance in a virtual memory, demand-paged
environment. It was used by lots of internal development groups (like
IMS) for moving from real-storage (os/360) to the virtual memory
environment (as well as identifying execution "hot-spots"). It was
eventually released to customers as VS/Repack in spring of 1976.

Walter Banks <walter@bytecraft.com> writes:
The essence of the internet was pretty much in place by the time of the
Sussex NATO conference in September 1973 (Three years before the
Saltzer visit to DARPA). This link points to the participants list at that
conference. The inventor(s) of the internet are essentially all within that list.

from above:
In 1971, Norman Rasmussen, founder and manager of IBM's Cambridge
Scientific Center, asked Hendricks to find a way for the CSC machine to
communicate with machines at IBM's other Scientific Centers. Hendricks
and Tim Hartmann, of the IBM Technology Data Center in Poughkeepsie, NY,
produced RSCS, which went into operation within IBM in 1973. RSCS was
later renamed and released to IBM customers as the VM/370 Networking
PRPQ in 1975.[3][4] The importance of this subsystem as a component of
VM is described by Robert Creasy.[5]

Part of the issue was that while the vnet native protocol was in use
earlier ... the implementation had to be done in a layered way ... so
that it could transparently interoperate with the other major internal
networking protocol, hasp/jes2.

Internally, between the mostly campus hasp systems, they were running
some support that came from the triangle universities ("TUCC" in cols
68-71 of the source code). The implementation was intertwined with
standard HASP support and not cleanly layered ... and node definitions
were done by taking empty entries in the HASP pseudo-device table (a
255-entry table used by hasp for pseudo unit-record devices ... a
typical HASP installation might have 60-80 entries in use ... so the
TUCC code could define up to 170-190 network nodes).

The VNET code had to be cleanly layered with gateway-like functionality
and support both native VNET drivers as well as gateway drivers that
would talk to HASP/JES2. As HASP/JES2 evolved, it became even more
convoluted ... since the HASP/JES2 network support code was so
intertwined with the rest of its operations ... traffic between two
HASP/JES2 nodes at different releases could result in a HASP/JES2
crash, bringing down the whole operating system.

Internally, the VNET gateway function had to be expanded so that there
was a large library of HASP/JES2 drivers ... with the specific driver
started that corresponded to the HASP/JES2 level at the other end of
the link. It became the responsibility of the VNET HASP/JES2 drivers to
convert traffic into a canonical form and then translate into the
specific form required by the HASP/JES2 on the other end of the link
(eventually HASP/JES2 systems couldn't be trusted to directly
communicate with each other, requiring intermediate VNET nodes
... unless the installation tightly synchronized all the release
levels).
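
A hypothetical sketch (record formats and names invented, not the
actual VNET/NJE code) of that gateway-driver idea: incoming traffic is
normalized to one canonical record format, and a per-release output
driver emits whatever the JES2 level on the other end of the link
expects.

#include <stdio.h>

struct canonical_record {              /* single internal canonical form */
    char origin[9], destination[9];
    const char *payload;
};

/* one output driver per remote release level */
typedef void (*emit_fn)(const struct canonical_record *);

static void emit_jes2_v3(const struct canonical_record *r)
{ printf("JES2v3 hdr [%s->%s] %s\n", r->origin, r->destination, r->payload); }

static void emit_jes2_v4(const struct canonical_record *r)
{ printf("JES2v4 hdr [%s=>%s] %s\n", r->origin, r->destination, r->payload); }

static void gateway_forward(const struct canonical_record *r, emit_fn remote_driver)
{
    /* traffic is already normalized to canonical form; the driver started
       for this link handles the release-specific translation */
    remote_driver(r);
}

int main(void)
{
    struct canonical_record r = { "NODEA", "NODEB", "spool file" };
    gateway_forward(&r, emit_jes2_v3);   /* link to a back-level JES2 */
    gateway_forward(&r, emit_jes2_v4);   /* link to a newer JES2      */
    return 0;
}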

The internal network also quickly exceeded the 170-190 HASP/JES2
limitation ... and the HASP/JES2 implementation would also discard
traffic if either the origin node or the destination node wasn't in its
local table. The combination of all the factors pretty much limited
HASP/JES2 to boundary nodes.

At the 1jan83 great change-over from arpanet to internetworking
protocol there were possibly 100 IMP nodes and around 255 hosts
... 1983 was when the internal network passed 1000 nodes. The internal
network was larger than the arpanet/internet from just about the
beginning until possibly late '85 or early '86. Part of the reason was
the requirement for IMP nodes, which was pretty limiting ... the 1jan83
move to internetworking protocol also eliminated the IMP requirement.

Ed's wiki entry references this old post of mine ... which gives some
of the internal network 1983 activity ... including a list of all
corporate locations that had one or more new network nodes added
during 1983.
http://www.garlic.com/~lynn/2006k.html#8

Wonder what happened to Sarbanes-Oxley and the SEC? Possibly even GAO
started to think the SEC wasn't doing anything and started doing
reports of public company fraudulent financial filings (even showing an
uptick after SOX).

the rhetoric during passage of sarbanes-oxley was that auditors and
executives would do jail time if financial filings weren't
correct. More serious people just claimed that it was a full-employment
gift for auditors and that possibly the only part that might make any
difference was the whistle-blower section.

Middle of last decade, I was at an EU conference of corporate CEOs and
exchange presidents on the subject of SOX audit costs leaking into
Europe. I just repeated the comments that the whole audit thing was
primarily a gift to the audit industry.

During congressional hearings into Madoff, the person that had tried
unsuccessfully for a decade to get the SEC to do something about Madoff
(Madoff turned himself in, which finally forced the SEC to do
something) testified that whistle-blowers turn up 13 times more fraud
than audits ... and that the SEC didn't have a TIP hotline ... but the
SEC did have a hotline for corporations to complain about audits.

from Confidence Men
http://www.amazon.com/Confidence-Men-Washington-Education-ebook/dp/B0089LOKKS
... the economic "A-team" helped get the president elected and in the
"japan-or-sweden" choice they were going to choose "sweden" ... but
they were also going to hold those on wallstreet accountable ... the
president then appointed the "B-team", which selected the "japan"
solution (many had also participated in the bubble and were not going
to hold those responsible accountable)

...

aka the people selected to fix the situation were chosen from the same
crowd responsible for the problem

I was going to get something like $20M from NSF for the NSFNET backbone
... we already had T1 and faster links running internally. Then the
budget got cut and plans for the NSFNET backbone got re-orged ... some
amount of what went on is in this old email
http://www.garlic.com/~lynn/lhwemail.html#nsfnet

an NSFNET backbone T1 RFP was released (calling for T1 links) and
internal politics prevented us from bidding. The director of NSF wrote
the company a letter, copying the CEO, trying to help ... but that just
made the internal politics worse (as did comments that what we already
had running was at least five yrs ahead of all bid submissions).

The final winning bid was only able to put in 440kbit/sec links (not
T1) ... and then, somewhat to try and meet the letter of the RFP, put
in T1 trunks with telco multiplexors (running multiple 440kbit/sec
links over T1 trunks) ... I would make derogatory references that they
might be able to call it a T5 network, since some of the T1 trunks may
have, in turn, been multiplexed over T5 trunks.

The communication group was also generating a lot of mis-information
about SNA applicability to the NSFNET T1 backbone ... even tho SNA
products only had support for 56kbit/sec links. somebody collected a
bunch of the communication group mis-information and redistributed it
... a small part reproduced here
http://www.garlic.com/~lynn/2006w.html#email870109

We were having some custom hardware built on the other side of the
pacific and, the friday before I was to make a visit, the communication
group sent out an announcement for a new "high-speed" discussion group
with the following definitions:
low-speed: <9.6kbits
medium-speed: 19.2kbits
high-speed : 56kbits
very high-speed: 1.5mbits

Monday morning, in a conference room on the other side of the pacific,
the definitions were:
low-speed: <20mbits
medium-speed: 100mbits
high-speed: 200-300mbits
very high-speed: >600mbits

it was rather interesting since the communication group was claiming
customers didn't need/want T1 until sometime in the 90s. They had done
a study of customer 37x5 "fat pipes" (multiple parallel 56kbit links
simulating a faster single link). They plotted the number of customer
2-link, 3-link, 4-link, etc installations and found it dropped to zero
by six links (aka six parallel 56kbit links) ... justification for the
communication group not having products supporting faster than
56kbit/sec. What they failed to mention was that most telcos tariffed a
single T1 link at about the same as five or six 56kbit links. Customers
wanting more than about 200kbits just got a real T1 link and switched
to support from some other vendor (a trivial survey turned up 200 such
T1 customers at a time when the communication group was claiming no
customer wanted T1 for another 6-8yrs).

later the communication group cobbled together the 3737 kludge, sort of
able to do T1 ... it would simulate a local channel-to-channel and
would immediately do ACKs to the host vtam ... as if traffic had
already reached the destination ... spoofing the host vtam into
reaching for T1 thruput. a couple recent posts with old email from 1988
discussing the 3737:
http://www.garlic.com/~lynn/2011g.html#75 We list every company in the world that has a mainframe computer
http://www.garlic.com/~lynn/2011g.html#77 Is the magic and romance killed by Windows (and Linux)?

--
virtualization experience starting Jan1968, online at home since Mar1970

jcewing@ACM.ORG (Joel C. Ewing) writes:
How many of the web sites you visit on a daily basis are something
other than a university or a government research facility? How many
of the people that you regularly communicate with on the Internet are
not at one of those facilities, and for that matter, are you in the
set of people not at those facilities? That's how much of the
Internet would be missing (99.99% +) if legislation in 1992 had not
opened up this government military/research network to commercial use.

The government ARPA-net became the Internet we know today because Al
Gore recognized its potential and pushed legislation, first in 1988 to
help link universities and libraries, and additional legislation in
1992 which opened it to commercial traffic. Probably someone else
would have eventually done so if he hadn't, but maybe not for another
decade or more; or maybe enough of Congress would have been bought by
a major TelCom for them to have been granted an exclusive monopoly on
the Internet, totally changing its character. No one else in Congress
was pushing for expanded information access at the time. That's why
Vint Cerf gives Al Gore credit.

regarding the $20M ... the original was coming out of the supercomputing
center funding to link together the centers. then things changed when
that budget got cut. old email about getting $20M (before the cut)
http://www.garlic.com/~lynn/2011e.html#email860915b

the RFP was finally awarded 24Nov1987 for $11.2M (but as previously
referenced, I was prevented from bidding ... even over objections
of director of NSF).

I've mentioned several times there were additional reasons for the
non-commercial AUPs (acceptable use policies). At the time telcos had
huge fixed costs covered by use tariffs (including bytes transferred).
There was an enormous amount of (unused) dark fiber ... but there was a
big chicken&egg situation. Without a drastic reduction in use charges,
the big bandwidth-hungry applications weren't going to appear. With a
straight significant reduction in the use charges, it might be a decade
before use reached a level that covered fixed operating costs (i.e. a
large deficit, operating in the red).

The estimate was that actually closer to $50M was provided in various
bandwidth for the NSFNET backbone ... for a closed technology incubator
(spurring growth of bandwidth-hungry applications) ... with the
non-commercial AUP eliminating any commercial paying traffic moving
over.

--
virtualization experience starting Jan1968, online at home since Mar1970

I was going to get something like $20M from NSF for the NSFNET backbone
... we already had T1 and faster links running internally. Then the
budget got cut and plans for the NSFNET backbone got re-orged ... some
amount of what went on is in this old email
http://www.garlic.com/~lynn/lhwemail.html#nsfnet

an NSFNET backbone T1 RFP was released (calling for T1 links) and
internal politics prevented us from bidding. The director of NSF wrote
the company a letter, copying the CEO, trying to help ... but that just
made the internal politics worse (as did comments that what we already
had running was at least five yrs ahead of all bid submissions).

The final winning bid was only able to put in 440kbit/sec links (not
T1) ... and then, somewhat to try and meet the letter of the RFP, put
in T1 trunks with telco multiplexors (running multiple 440kbit/sec
links over T1 trunks) ... I would make derogatory references that they
might be able to call it a T5 network, since some of the T1 trunks may
have, in turn, been multiplexed over T5 trunks.

The communication group was also generating a lot of mis-information
about SNA applicability to the NSFNET T1 backbone ... even tho SNA
products only had support for 56kbit/sec links. somebody collected a
bunch of the communication group mis-information and redistributed it
... a small part reproduced here
http://www.garlic.com/~lynn/2006w.html#email870109

We were having some custom hardware built on the other side of the
pacific and, the friday before I was to make a visit, the communication
group sent out an announcement for a new "high-speed" discussion group
with the following definitions:
low-speed: <9.6kbits
medium-speed: 19.2kbits
high-speed : 56kbits
very high-speed: 1.5mbits

Monday morning, in a conference room on the other side of the pacific,
the definitions were:
low-speed: <20mbits
medium-speed: 100mbits
high-speed: 200-300mbits
very high-speed: >600mbits

it was rather interesting since the communication group was claiming
customers didn't need/want T1 until sometime in the 90s. They had done
a study of customer 37x5 "fat pipes" (multiple parallel 56kbit links
simulating a faster single link). They plotted the number of customer
2-link, 3-link, 4-link, etc installations and found it dropped to zero
by six links (aka six parallel 56kbit links) ... justification for the
communication group not having products supporting faster than
56kbit/sec. What they failed to mention was that most telcos tariffed a
single T1 link at about the same as five or six 56kbit links. Customers
wanting more than about 200kbits just got a real T1 link and switched
to support from some other vendor (a trivial survey turned up 200 such
T1 customers at a time when the communication group was claiming no
customer wanted T1 for another 6-8yrs).

later the communication group cobbled together the 3737 kludge, sort of
able to do T1 ... it would simulate a local channel-to-channel and
would immediately do ACKs to the host vtam ... as if traffic had
already reached the destination ... spoofing the host vtam into
reaching for T1 thruput. a couple recent posts with old email from 1988
discussing the 3737:
http://www.garlic.com/~lynn/2011g.html#75 We list every company in the world that has a mainframe computer
http://www.garlic.com/~lynn/2011g.html#77 Is the magic and romance killed by Windows (and Linux)?

regarding the $20M ... the original was coming out of the supercomputing
center funding to link together the centers. then things changed when
that budget got cut. old email about getting $20M (before the cut)
http://www.garlic.com/~lynn/2011e.html#email860915b

the RFP was finally awarded 24Nov1987 for $11.2M (but as previously
referenced, I was prevented from bidding ... even over objections
of director of NSF).

I've mentioned several times there were additional reasons for the
non-commercial AUPs (acceptable use policies). At the time telcos had
huge fixed costs covered by use tariffs (including bytes transferred).
There was an enormous amount of (unused) dark fiber ... but there was a
big chicken&egg situation. Without a drastic reduction in use charges,
the big bandwidth hungry applications weren't going to appear. With a
straight, significant reduction in the use charges, it might be a decade
before use reached a level that covered fixed operating costs
(i.e. operating at a large deficit, in the red).

Estimate was that actually closer to $50M was provided in various
bandwidth for the NSFNET backbone ... as a closed technology incubator
(spurring growth of bandwidth hungry applications) ... with the
non-commercial AUP eliminating any commercial paying traffic moving
over.

--
virtualization experience starting Jan1968, online at home since Mar1970

another part of the issue was that RSCS had native vnet drivers and then
NJI (hasp/jes2) drivers. During the period that BITNET was growing in
the mid-80s, they stopped shipping the native vnet drivers ... leaving
only the NJI drivers ... although the native vnet drivers continued to
be used on the internal network because they were much more efficient
... at least up until the change-over of the internal network to SNA in
the late 80s.

arpanet used IMPs for network nodes that did packet-based communication
... but the connected hosts did a host-to-host end-to-end connection
protocol. In the a.f.c. thread it was pointed out that even by 1975, it
was recognized that it wasn't scaling. A comparison from the period was
a post-office analogy ... getting something from new york city to
fairbanks alaska required that all the post offices between NYC and
fairbanks be up and operational simultaneously ... which wasn't a
requirement for RSCS. RSCS traffic would eventually get from NYC to
fairbanks ... even if there was only intermittent connectivity between
the intermediate nodes (including if there was *never* full end-to-end
connectivity).
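
purely as an illustration of the difference (a toy sketch in C, nothing
to do with actual RSCS or IMP code): in store-and-forward, a node only
needs a working link to its next hop at *some* point in time; traffic
sits queued at an intermediate node until that link comes up, so full
end-to-end connectivity is never required. in the sketch no two links
are ever up in the same hour, yet the file still arrives:

  /* toy store-and-forward sketch -- hypothetical, not actual RSCS/IMP code */
  #include <stdio.h>

  #define NODES 4                   /* 0=NYC, 1=chicago, 2=seattle, 3=fairbanks */

  int queued[NODES] = {1, 0, 0, 0}; /* one file queued at NYC */

  /* link_up[i][t]: is the link from node i to node i+1 up during hour t? */
  int link_up[NODES - 1][4] = {
      {1, 0, 0, 0},                 /* NYC-chicago up only in hour 0 */
      {0, 1, 0, 0},                 /* chicago-seattle up only in hour 1 */
      {0, 0, 0, 1},                 /* seattle-fairbanks up only in hour 3 */
  };

  int main(void)
  {
      for (int t = 0; t < 4; t++)                /* hours pass ... */
          for (int i = NODES - 2; i >= 0; i--)   /* each node checks its outbound link */
              if (queued[i] && link_up[i][t]) {
                  queued[i] = 0;                 /* forward one hop and */
                  queued[i + 1] = 1;             /* queue at the next node */
                  printf("hour %d: node %d -> node %d\n", t, i, i + 1);
              }
      printf("file %s at fairbanks\n", queued[NODES - 1] ? "arrived" : "did not arrive");
      return 0;
  }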

For lots of reasons, the internal network was larger than the
arpanet/internet from just about the beginning until either late '85 or
early '86 ... the internet's growth (and its passing the internal
network) was primarily because of the switch-over to internetworking
protocol on 1jan1983.

At the time of the 1983 switch-over there were approximately 100 IMP
nodes and possibly 255 hosts ... while the internal network was in the
process of passing 1000 nodes

even though by that time, it would have been much more efficient and
cost-effective to have converted rscs drivers to tcp/ip (in much the
same way that was done for "bitnet-II").

the vm370 tcp/ip product was available ... even tho there were some
performance issues (limited to about 44kbytes/sec while using nearly a
whole 3090 processor) ... but I would shortly be doing the changes to
support rfc1044 ... and in some tuning tests at cray research got
channel thruput between a 4341 and a cray using only a modest amount of
4341 processor (possibly a 500 times improvement in bytes moved per
instruction executed) ... some past posts
http://www.garlic.com/~lynn/subnetwork.html#1044

note ... later, the vm370 tcp/ip product was ported to mvs by adding
simulation of some of the vm370 functions.

piece of recent post from a.f.c. about the requirement for doing
NJI drivers in RSCS:

Internally, between mostly campus hasp systems, they were running some
support that came from triangle university ("TUCC" in cols 68-71 of the
source code). The implementation was intertwined with standard HASP
support and not cleanly layered ... and node definitions were done by
taking empty entries in the HASP pseudo-device table (a 255-entry table
used by hasp for pseudo unit-record devices ... a typical HASP
installation might have 60-80 entries in use ... so the TUCC code could
define up to 170-190 network nodes).

The VNET code had to be cleanly layered with gateway-like functionality
and support both native VNET drivers as well as gateway drivers that
would talk to HASP/JES2. As HASP/JES2 evolved, it became even more
convoluted ... since the HASP/JES2 network support code was so
intertwined with the rest of its operations ... traffic between two
different HASP/JES2 nodes at different releases could result in a
HASP/JES2 crash bringing down the whole operating system.

Internally, the VNET gateway function had to be expanded so that there
was a large library of HASP/JES2 drivers ... with the specific driver
started that corresponded to the HASP/JES2 level at the other end of the
link. It became the responsibility of the VNET HASP/JES2 drivers to
convert traffic into a canonical form and then translate it into the
specific form required by the HASP/JES2 on the other end of the link
(eventually HASP/JES2 systems couldn't be trusted to directly
communicate with each other, requiring intermediate VNET nodes
... unless the installation tightly synchronized all the release
levels).

The internal network also quickly exceeded the 170-190 HASP/JES2
limitation ... and the HASP/JES2 implementation would also discard
traffic if either the origin node or the destination node wasn't in its
local table. The combination of all these factors pretty much limited
HASP/JES2 to boundary nodes.
http://www.garlic.com/~lynn/submain.html#hasp

...
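
the canonical-form gateway approach described in the quoted piece can be
sketched roughly as below (a hypothetical sketch -- the names, wire
formats and release levels are made up, not the actual VNET internals):
a driver is selected per link to match the HASP/JES2 level at the far
end, and each driver only knows how to translate between its own wire
format and the common canonical record:

  /* hypothetical sketch of the canonical-form gateway pattern -- not actual VNET code */
  #include <stdio.h>

  struct canrec {                    /* canonical form of a network record */
      char origin[9], dest[9];
      char payload[64];
  };

  /* one driver per HASP/JES2 release level at the other end of a link */
  struct njedriver {
      int  level;                                    /* release level it talks to */
      void (*encode)(const struct canrec *, char *); /* canonical -> wire format */
  };

  /* trivial stand-ins for release-specific wire formats */
  static void enc_v3(const struct canrec *r, char *wire)
  { sprintf(wire, "V3:%s>%s:%s", r->origin, r->dest, r->payload); }

  static void enc_v4(const struct canrec *r, char *wire)
  { sprintf(wire, "V4|%s|%s|%s", r->origin, r->dest, r->payload); }

  static struct njedriver drivers[] = { {3, enc_v3}, {4, enc_v4} };

  /* pick the driver matching the HASP/JES2 level at the far end of the link */
  static struct njedriver *select_driver(int level)
  {
      for (unsigned i = 0; i < sizeof drivers / sizeof drivers[0]; i++)
          if (drivers[i].level == level)
              return &drivers[i];
      return 0;
  }

  int main(void)
  {
      struct canrec r = {"NYC", "FAIRBNKS", "hello"};
      char wire[128];

      select_driver(3)->encode(&r, wire);   /* link whose far end runs release 3 */
      printf("%s\n", wire);
      select_driver(4)->encode(&r, wire);   /* link whose far end runs release 4 */
      printf("%s\n", wire);
      return 0;
  }

decode would be the mirror image (wire format back into the canonical
record); the point is that adding support for yet another release level
is one more driver, instead of every node having to understand every
other node's format.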

note that JES2 eventually did expand support to 999 nodes ... but that
was only after the internal network had passed 1000 nodes ... some
reference to internal network exceeding 1000 nodes in 1983 (also
referenced in the edson wiki entry):
http://www.garlic.com/~lynn/2006k.html#8

It was in the late 80s that the communication group was generating a lot
of mis-information to justify converting the internal network to SNA
... as well as about its applicability to the internet (as previously
mentioned). It was also in this period that a senior disk engineer got a
talk scheduled at the world-wide, internal-only annual communication
group conference ... and opened the talk with the statement that the
communication group was going to be responsible for the demise of the
disk division. The scenario was that the communication group was
attempting to preserve its dumb terminal (vtam) paradigm (including the
terminal emulation install base) and had a stranglehold on the
datacenter (strategic "ownership" of everything that crossed the
datacenter walls); the disk division was seeing, in the drop-off in disk
sales, the leading edge of data fleeing the datacenter to more
distributed-computing friendly platforms. The disk division had come up
with several products to address the opportunity ... which were
constantly being vetoed by the communication group.
http://www.garlic.com/~lynn/subnetwork.html#terminal

--
virtualization experience starting Jan1968, online at home since Mar1970

printer history Languages influenced by PL/1

glen herrmannsfeldt <gah@ugcs.caltech.edu> writes:
Also, as I understand, 370/158s were reused inside the 3032
and 3033 for I/O. As a 370/158, it had microcode for both CPU
and channel operations. Running the channels for the 303x, it
only needed (probably new) channel microcode, to offload that
function from the CPU.

303x was a quick&dirty effort to get something out the door ... after
the future system effort had failed ... during the FS period lots of 370
efforts were suspended and/or killed off (which is credited with
allowing clone processors to get a market foothold). some past posts
mentioning FS
http://www.garlic.com/~lynn/submain.html#futuresys

303x channel director was 370/158 engine with only the integrated
channel microcode (and no 370 microcode)

3031 was two 370/158 engines, one dedicated to "channel director"
running just the integrated channel microcode and the other engine
running just the 370 microcode (a 3031 multiprocessor being four 370/158
engines: two channel directors and two 370 processors).

3032 was 370/168 reconfigured to use channel director as its
external channels

3033 started out being 370/168 logic remapped to 20% faster chips that
also had 10 times the circuits/chip ... the extra circuits originally
going unused. during development there was some remapping of logic to
consolidate more operations on-chip ... eventually getting the 3033 up
to nearly 50% faster than the 168. 3033 also used the channel director
for its i/o.

as an aside ... the 4341 also had an integrated channel ... but the 4341
processor engine was so much faster ... that with slight tweaks the 4341
supported 3380 3mbyte channel speeds ... something that the 370/158
engine was incapable of.

--
virtualization experience starting Jan1968, online at home since Mar1970

printer history Languages influenced by PL/1

hancock4 writes:
Did the 3031 actually use recycled circuits out of old 158's, or was
it new gear merely manufactured per 158 design and incorporated in a
bigger box?

From the description on the IBM website, it seems that the 3031 was
new technology given the improved storage and execution techniques.
http://www-03.ibm.com/ibm/history/exhibits/mainframe/mainframe_PP3031.html

the 370/158 engine manufacturing line had achieved an astonishing level
of efficiency ... the incremental cost of every additional 370/158 off
the line was unbelievably small (imagine some of the old numbers for the
incremental cost of building an additional automobile).

note 370/168-1 to 370/168-3 was a similar doubling of cache size
... but resulted in significant problems. the bit chosen for indexing
the additional cache lines was the 2k bit ... and therefore when a
370/168-3 was running in 2k-byte page mode (dos/vs, vs1) ... it only
used half the cache (because the 2k bit was now part of the virtual
address ... all the other cache index bits were low-order bits which
were the same in both virtual and real addresses). As a result, for
dos/vs & vs1 the 168-3 had the same performance as the 168-1. The
problem was even worse for vm370 running either dos/vs &/or vs1 in a
virtual machine. vm370 normally used 4k-byte page mode logic ... which
used the full 32kbyte cache. However, when switching to a dos/vs or vs1
virtual machine running with 2k-byte pages ... it used 2k-byte "shadow"
tables. In the switch between 4k-byte and 2k-byte virtual page mode
... there would be a full cache flush because of the different mapping
... with dos/vs & vs1 running under vm370 ... this would be happening at
very high frequency ... some number of vm370 customers (running dos/vs
&/or vs1) complained that after upgrading from 168-1 to 168-3
... performance degraded to worse than the 168-1.

something similar would happen for doubling cache for 158-3 to 3031.
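
a rough way to see the half-cache effect (the cache geometry below is
purely illustrative, picked so the arithmetic comes out, and not claimed
to be the actual 168 design): the set index has to be taken from address
bits that are identical in virtual and real addresses. with 4k pages the
2k bit (value 2048) is still inside the untranslated page offset and can
be used for indexing; with 2k pages it is translated, so the index can
only safely use one value for that bit ... reaching half the sets:

  /* illustrative cache-index arithmetic -- geometry chosen to make the point,
     not claimed to be the actual 370/168 cache design */
  #include <stdio.h>

  #define LINE_BYTES  32u
  #define WAYS         8u
  #define CACHE_BYTES (32u * 1024u)
  #define SETS        (CACHE_BYTES / WAYS / LINE_BYTES)   /* 128 sets, index bits 5..11 */

  static unsigned set_index(unsigned addr) { return (addr / LINE_BYTES) % SETS; }

  int main(void)
  {
      unsigned top_index_bit = LINE_BYTES * (SETS / 2);   /* value of the highest index bit */
      printf("sets=%u, top index bit value=%u\n", SETS, top_index_bit);   /* 128, 2048 */
      printf("set_index(0x000)=%u  set_index(0x800)=%u\n",
             set_index(0x000), set_index(0x800));   /* 0 and 64: the 2k bit picks the half */

      /* with 4k pages, address bits below 4096 are untranslated: all 128 sets usable.
         with 2k pages, only bits below 2048 are untranslated: the 2048 bit has to be
         pinned to a single value for indexing, so only 64 of the 128 sets are reachable
         -- i.e. half the cache, back to the same effective size as the older machine */
      printf("usable sets with 2k pages = %u of %u\n", SETS / 2, SETS);
      return 0;
  }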

the remapping of 168-3 logic to newer/faster chips started out 20%
faster than the 168-3 for the 3033 ... but eventually some logic tuning
got it up to 50% faster ... the 168-3 was an approx. 3mip machine and
the 3033 an approx. 4.5mip machine (depending on typical application
cache hit ratios).

as sowa's memo references, the 3081 was a kludged FS machine using the
FS machine's 370 emulator (an enormous increase in the number of
circuits compared to all other 370 machine implementations regardless of
vendor).
http://www.jfsowa.com/computer/memo125.htm

The first 3081 out the door was the 3081-D with supposedly two five-mip
processors ... but lots of benchmarks had 3081-D processors running
slower than the 3033. Then, similar to 168-1 to 168-3, the amount of
cache was doubled for the 3081-K, supposedly resulting in two 7mip
processors (i.e. a larger cache results in a lower cache miss / higher
cache hit ratio ... increasing the effective instruction execution rate
... aka fewer instruction stalls from cache misses waiting for storage).
However, some number of benchmarks had the 3081-K processor about the
same as or only slightly faster than a 3033 processor.

--
virtualization experience starting Jan1968, online at home since Mar1970

Stokes@INTERCHIP.DE (David Stokes) writes:
The virus vulnerability (and number of spambots and DOS attack bots) on
the Internet is much more a function of the Operating Systems of the
user nodes connected to the Internet than of the Internet itself. Much
of the current problem stems from early MS Windows design philosophy,
which didn't take the Internet seriously and implicitly assumed
networking and data sharing would only involve local networking
where all parties had benign intent; so, MS made it easy for machines to
share active content that could access and alter content on remote
machines or even initiate remote programs on other machines, and put the
integrity management burden on end users without providing any tools to
make management possible.

early days of desktop computing were stand-alone machines ... with some
number of applications (like games) evolving that effectively took over
the whole machine. later, small business (safe) local-area-networks also
evolved for desktop machines. in both these environments, desktop
machines didn't have any countermeasures for attacks or compromises.

for the small business, safe, local-area-networks ... a convention
developed where automatic scripting (typically basic) was added to
application-specific (mostly business) data files ... these files would
be exchanged in the small business, safe, LAN environment ... where
applications would automatically execute the embedded scripts included
in the data files.

at the 1996 MSDC conference at Moscone ... all the banners were
proclaiming support for the internet ... however the subtheme in all the
sessions was "protecting your investment" ... basically the paradigm of
automatic execution of embedded scripts in application data files would
continue ... and there would be a simple retargeting of the small, safe
LAN support to the internet (with no additional countermeasures for
attacks or compromises)

I've periodically used the analogy of going out the airlock in open
space w/o a spacesuit.

Before he disappeared, Jim Gray had con'ed me into interviewing for the
position of chief security architect in Redmond. The interview went on
over a period of several weeks but we were never able to come to
agreement ... i even used the above description of the situation (lack
of countermeasures) during the interview process.

jwglists@GMAIL.COM (John Gilmore) writes:
The scientific community made early and significant use of the DARPA
predecessor of today's Internet, and almost none of the problems that
afflict us today emerged during that period. There was no money to be
made by chicanery, and little of it therefore occurred.

Things are now very different. The availability of millions of new
Internet dupes has spawned whole new classes of crime and greatly
facilitated others that are much older than it is.

note in the 95/96 time-frame, industry presentations by online dialup
consumer banking operations were explaining the move to the internet
... in large part motivated by the large consumer support costs related
to serial-port dialup modems (which could be offloaded to ISPs). at the
same time the commercial dialup banking/cash-management operations were
saying that they would never move to the internet ... because of a long
list of vulnerabilities.

late 90s, the EU had the FINREAD standard as a countermeasure to a long
list of vulnerabilities related to internet-connected desktops
... including compromised desktops.

some number of vendors were pushing hardware (chip) tokens for
authentication as a countermeasure to many kinds of fraud. approx. the
start of the century, one of the plastic magstripe payment cards
included a chip in the card and provided a free give-away of serial-port
card readers. The enormous customer support costs associated with
serial-port card readers resulted in a rapidly spreading opinion in the
industry that hardware tokens weren't practical in the consumer
market. As a result there was a pullback/abandonment of consumer
oriented chipcard-based programs in the industry ... including the EU
FINREAD effort.

We participated in an after-action review of the situation with some of
the people in redmond ... identifying that the problem was with the
serial-port devices ... not the chipcards. Apparently in the few short
years between online dial-up banking moving to the internet and the
give-away serial-port cardreaders, the institutional knowledge about the
enormous serial-port consumer support costs had evaporated (which had
also been a major motivation for USB development).

Along the way, the online dialup commercial banking/cash-management
operations did move to the internet ... and the businesses have
experienced all the exploits and vulnerabilities previously predicted. A
number of times in the past decade, it has been recommended that
businesses have a dedicated PC for online banking that is *NEVER* used
for any other purpose (semi reverting to the days of online dialup
banking).

hancock4 writes:
So, if I understand this correctly, they just merely kept building
more CPU units since it was cheap to do so, rather than re-using old
ones.

The workings of the cache and virtual addressing was amazingly
complex. How they could write an operating system and micro code to
track all the stuff is beyond me. I once got a book explaining the
internals of modern Z series and the complexity is beyond belief.
Kind of downer when you're running an old simple program against a
small file.

redoing a manufacturing line can be a major expense ... and the first
one off the line is really costly ... if you expense the whole line on
the price of that first unit. however, every incremental unit off the
line approaches the cost of the raw materials.

manual costs to refurbish an old unit can be greater than just running
the automated line for a new one.

modern day computer chips can have billions of circuits and are
enormously complex, and the upfront cost to design such a new chip can
be significant. also, building a new fab for the latest smaller circuits
can be over a billion dollars for each newer generation. then things are
set up to turn out billions of such chips ... once the initial
investment is recovered ... the incremental cost for each additional
chip is relatively small.

current generation, a maximum configured z196 with 80 processors goes
for $28M and is rated at 50BIPS ($560,000/BIPS). by comparison, the
e5-2600 is rated at 527BIPS and IBM lists a base price of $1815 for an
e5-2600 blade (i.e. an e5-2600 server blade has over ten times the
processing power of a fully configured z196, at $3.44/BIPS).
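
as a back-of-the-envelope check of those $/BIPS numbers (using only the
list prices and BIPS ratings quoted above):

  /* back-of-the-envelope $/BIPS check using the figures quoted above */
  #include <stdio.h>

  int main(void)
  {
      double z196_price = 28e6,   z196_bips = 50.0;   /* max configured z196 */
      double e5_price   = 1815.0, e5_bips   = 527.0;  /* e5-2600 blade list price */

      printf("z196:    $%.0f/BIPS\n", z196_price / z196_bips);          /* ~$560,000/BIPS */
      printf("e5-2600: $%.2f/BIPS\n", e5_price / e5_bips);              /* ~$3.44/BIPS */
      printf("e5-2600 BIPS vs max z196: %.1fx\n", e5_bips / z196_bips); /* ~10.5x */
      return 0;
  }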

I would claim that the z196 and e5 chips are of relatively equivalent
complexity and (incremental) manufacturing cost. Recent annual
"mainframe" revenue seems to be about the equivalent of 130-140 fully
configured z196 systems.

The older Hardware school

nedbrek <nedbrek@yahoo.com> writes:
A room sized computer would either be a supercomputer (lots of cpus
and gpus, not many hard drives) or server farm (lots of storage with
just enough cpus to meet demand).

Both applications are "embarrassingly parallel" - doing lots of
different jobs with little or no interaction.

The limit for a single cpu is power (both delivery and heat
removal). IBM had some chips that ran at about 200 W, which is the
most you probably want to deal with. (By comparison, Intel is
targeting about 30 W, and ARM is like 1).

I'd have to look at SPEC, but figure the 200 W part is maybe 2x faster
than your 50 W desktop part (4x power for 2x perf).

The techniques in all modern cpus are the same as each other, and have
built on what came before. You just get more with more power budget.

the latest fully configured z196 mainframe with 80 processors is rated
at 50BIPS and goes for $28M ($560,000/BIPS). An e5-2600 is rated at
527BIPS and IBM lists a base price of $1815 for an e5-2600 blade
($3.44/BIPS).

--
virtualization experience starting Jan1968, online at home since Mar1970

Stokes@INTERCHIP.DE (David Stokes) writes:
is highly dubious. All attempts to create security in computer
systems seem to be doomed as clever people find ways around them. The
Internet is more like a living organism that wants to live and expand
than a traditional piece of technology. As far as counterfactuals go
though, I'm actually pretty sure that with "planned transition" and
"oversight" we wouldn't have an Internet at all, just some more pipes
for advertising, "entertainment" and (mis)information.

in the 90s, the major (internet) exploit was buffer overflow
vulnerabilities related to the C-language programming convention for
handling strings. The vm/370 tcp/ip product implementation was done in
vs/pascal (earlier in the thread, I mentioned having done the rfc1044
support for the product, getting possibly a 500 times improvement in
bytes moved per instruction executed) ... and had none of the buffer
overflow vulnerabilities found in c-language implementations. The
Multics operating system was implemented in PLI, and an old security
vulnerability assessment found none of the buffer overflow
vulnerabilities found in C-language implementations. lots of past posts
mentioning buffer overflow vulnerability
http://www.garlic.com/~lynn/subintegrity.html#overflow
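
for anybody that hasn't seen the mechanics ... the classic problem is
the C library convention of copying a string until its terminating NUL
with no notion of the destination buffer's size. a deliberately
simplified illustration (not taken from any particular tcp/ip
implementation):

  /* minimal illustration of the C string-handling convention behind classic
     buffer overflows -- simplified example, not from any particular product */
  #include <stdio.h>
  #include <string.h>

  void unsafe_copy(const char *input)
  {
      char buf[16];
      strcpy(buf, input);          /* copies until NUL with no length check: longer
                                      input overruns buf and whatever sits next to it */
      printf("%s\n", buf);
  }

  void bounded_copy(const char *input)
  {
      char buf[16];
      strncpy(buf, input, sizeof buf - 1);   /* copy at most 15 bytes ... */
      buf[sizeof buf - 1] = '\0';            /* ... and always NUL-terminate */
      printf("%s\n", buf);
  }

  int main(void)
  {
      const char *hostile = "AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA";  /* 32 bytes, > 16 */
      bounded_copy(hostile);       /* truncates safely */
      /* unsafe_copy(hostile);        would overrun buf -- the classic exploit vector */
      return 0;
  }

languages whose strings carry an explicit length (as with the vs/pascal
and PLI implementations mentioned above) don't have the copy-until-NUL
convention, which is the point.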

About a decade ago, the exploits had shifted to approx. 1/3rd buffer
overflow vulnerabilities (related to c-language features), 1/3rd
automatic scripting vulnerabilities (previously mentioned from the 1996
Moscone MSDC), and 1/3rd various forms of social engineering (enticing
individuals into executing malware applications which would install
exploit code on their machines). Earlier in the thread, I also mentioned
that in the 90s there was the EU FINREAD standard, a countermeasure for
malware-compromised internet-connected PCs (but various unfortunate
circumstances resulted in abandoning the effort).

Part of the issue is that there is a fundamentally different security
paradigm for desktop machines that operate stand-alone and/or on small,
safe networks and require no security countermeasures (especially those
with a heritage of applications, like games, that have a convention of
taking over the machine) ... and internet appliances ... nearly
diametrically opposing security requirements (my earlier reference to
going out into open space w/o a spacesuit).

I was trying to categorize CVE vulnerability&exploit reports. I talked
to the CVE people about suggestion for requiring more structure in the
reports ... but at the time, their response was they were lucky to even
get the unstructured descriptions.

John.McKown@HEALTHMARKETS.COM (McKown, John) writes:
What, no mention of CP/M-86? I don't think that MP/M ever had a x86
version. I do remember running Pick on my XT clone. Now that was a
weird beastie. And you totally ignored things like the Amiga. I loved
what I saw of that software. I wish now that my boss at the time
hadn't convinced me to go with an XT clone.

as an undergraduate in the 60s, I was doing lots of operating system
stuff and even got requests from the vendor to do certain things. I
didn't learn about those guys until a long time later ... but in
retrospect, some of the change requests were of a nature that they may
have originated from such organizations.

--
virtualization experience starting Jan1968, online at home since Mar1970