from above:
Blythe Masters (her very rare public appearance can be seen here)
suddenly finds herself in hot water for, among other things, allegedly
lying under oath, obstructing justice and "engaging in a systematic
cover up" to "approve schemes" seeking to defraud the states of
California and Michigan in electricity trading (Enron flashbacks are
more than welcome).

Quadibloc <jsavard@ecn.ab.ca> writes:
It's true enough that such machines as the Atlas, the ICL 1900, the
DEUCE, the Leo III, the Regencentralen GIER, and even the BESM-6 are
conspicuous by their absence, while IBM's later models are somewhat
over-represented...

Interestingly enough, the Control Data 6600 was included, but the Cray-
I was omitted... and neither Amdahl nor any of the other plug-
compatibles was noted.

After billions of (early 70s) dollars, Future System implodes w/o being
announced ... and then there is a mad rush to get products back into the
360/370 product pipelines.
http://www.jfsowa.com/computer/memo125.htm

3033 starts out as the 168 remapped to some left-over FS chip technology
that was 20% faster ... followed by 3081 that is still more left-over FS
technology ... from above:
The 370 emulator minus the FS microcode was eventually sold in 1980 as
the IBM 3081. The ratio of the amount of circuitry in the 3081 to its
performance was significantly worse than other IBM systems of the time;
its price/performance ratio wasn't quite so bad because IBM had to cut
the price to be competitive. The major competition at the time was from
Amdahl Systems -- a company founded by Gene Amdahl, who left IBM
shortly before the FS project began, when his plans for the Advanced
Computer System (ACS) were killed. The Amdahl machine was indeed
superior to the 3081 in price/performance and spectacularly superior in
terms of performance compared to the amount of circuitry.

... snip ...

168 was enhancement to 165 with faster memory technology and better
optimization of the 165 horizontal microcode, getting avg. machine cycles
per 370 instruction down to 1.6 from 2.1. In that sense, the next new
(high-end 370) processor after the 165 in 1970 (except for 168 & 3033
increments and 3081 FS left-over) was 3090 in 1986, more than 15 years
later.

At the end of the ACS-360 article there is a section on features from
ACS-360 finally showing up in the IBM ES/9000 announced in 1990.

--
virtualization experience starting Jan1968, online at home since Mar1970

also announced with es/9000 in 1990 was fiber optic ESCON ... something that
had been kicking around POK for a decade or more. The rs/6000 had SLA
... which started out as escon ... but was made full-duplex and approx. ten
percent faster ... along with significantly cheaper optical drivers.

I had been asked in 1988 to help LLNL standardize some serial technology
they had. The rs/6000 engineer that worked on the proprietary 220mbit SLA
... wanted to turn around and do an 800mbit version. We convinced him to
join the FCS standards committee and work on full-duplex 1gbit industry
standards (i.e. aggregate 2gbit, 1gbit concurrent in each direction).
In effect, could say that by the time ESCON finally made it out the
door, it was already obsolete.

The native FCS has complete I/O requests being sent down the outbound
path effectively as data ... and then actual data occurring
asynchronously ... with minimal end-to-end handshaking latency. This
dates back at least to the work I did on HYPERChannel in 1980 for
mainframe channel extender (in the case of moving 300 people IMS group
to offsite, the local 3270 channel attached terminals worked better over
HYPERChannel 1.5mbit/sec link ... the end users saw no difference and
the real IBM mainframe channel efficiency improved 10-15%). some
past posts
http://www.garlic.com/~lynn/subnetwork.html#hsdt
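
a rough sketch (python, purely illustrative numbers) of why downloading
the complete i/o request and letting data/status flow back asynchronously
beats per-operation end-to-end handshaking over an extended link:

# back-of-envelope latency model; link delay and operation count are
# made-up illustrative numbers, not measurements of any real product
ONE_WAY_MS = 5.0      # hypothetical one-way latency of the extended link
OPERATIONS = 10       # transfers/commands making up one I/O request

def per_operation_handshake(ops, one_way_ms):
    # each operation waits for an end-to-end round trip before the next starts
    return ops * 2 * one_way_ms

def downloaded_request(ops, one_way_ms):
    # complete request sent down once; data/status flow back asynchronously
    return 2 * one_way_ms

print("per-operation handshaking: %.0f ms" % per_operation_handshake(OPERATIONS, ONE_WAY_MS))
print("downloaded request       : %.0f ms" % downloaded_request(OPERATIONS, ONE_WAY_MS))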

Some POK channel engineers eventually become involved and define an
extremely heavy-weight layer on top of FCS (that significantly cuts the
native throughput of FCS) ... which morphs into FICON.
https://en.wikipedia.org/wiki/FICON

for instance from above:
FICON uses two Fibre Channel exchanges for a channel - control unit
connection -- one for each direction. So while a Fibre Channel exchange
is capable of carrying a command and response on a single exchange, and
all other FC-4 protocols work that way, the response to a FICON IU is
always on a different exchange from the IU to which it is a response.

... snip ...

recent z196 peak I/O benchmark got 2M IOPS using 104 FICON ... while
recent FCS announced for e5-2600 claims over a million IOPS on a single FCS.
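
back-of-envelope per-channel arithmetic from those two benchmark numbers
(python):

# per-channel IOPS implied by the two benchmarks quoted above
z196_iops, ficon_count = 2_000_000, 104
fcs_iops, fcs_count = 1_000_000, 1
print("per FICON: %d IOPS" % (z196_iops // ficon_count))            # ~19,230
print("per FCS  : %d IOPS" % (fcs_iops // fcs_count))               # 1,000,000
print("ratio    : %.0fx" % (fcs_iops / (z196_iops / ficon_count)))  # ~52x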

--
virtualization experience starting Jan1968, online at home since Mar1970

Anne & Lynn Wheeler <lynn@garlic.com> writes:
The native FCS has complete I/O requests being sent down the outbound
path effectively as data ... and then actual data occurring
asynchronously ... with minimal end-to-end handshaking latency. This
dates back at least to the work I did on HYPERChannel in 1980 for
mainframe channel extender (in the case of moving 300 people IMS group
to offsite, the local 3270 channel attached terminals worked better over
HYPERChannel 1.5mbit/sec link ... the end users saw no difference and
the real IBM mainframe channel efficiency improved 10-15%). some
past posts
http://www.garlic.com/~lynn/subnetwork.html#hsdt

Two major people behind cdc6600 are Cray and Thornton; Cray leaves and
forms Cray Research (supercomputer) and Thornton leaves and forms Network
Systems (that does HYPERChannel).

In 1980, Network Systems wants to release my mainframe channel extender
HYPERChannel software support. The people in POK playing with what gets
released as ESCON a decade later, get it veto'ed because they are afraid
that if it's in the market, it might interfere with them being able to
get ESCON released.

IBM eventually does a mainframe tcp/ip product ... however, because the
communication group is fighting off client/server and distributed
computing (in defense of their dumb terminal paradigm and their dumb
terminal emulation install base) ... some past posts
http://www.garlic.com/~lynn/subnetwork.html#terminal

they claim that the LAN/channel interface for tcp/ip is under their
control ... it gets enormously more expensive and significantly slower
... about 44kbytes/sec sustained using nearly a full 3090 cpu.

I then do the rfc1044 enhancement to tcp/ip and in some tuning tests
between a 4341 and a cray at cray research, it gets sustained channel
throughput using only a modest amount of the 4341 processor (about a
factor of 500 times improvement in bytes moved per instruction
executed). misc. past posts mentioning rfc1044
http://www.garlic.com/~lynn/subnetwork.html#1044

--
virtualization experience starting Jan1968, online at home since Mar1970

tens of pictures of different computers with 2-4 paragraphs per computer
(series of web pages with one computer per page), starts with Harvard
Mark I through System z10 EC (article is from 2009)

maybe your browser isn't handling the web page(s)???

??? z10 EC page
The System z10 EC

While this article is supposed to be a history of big computers, this
last entry is about a computer that is still being sold today. But it
was sold yesterday too, and that's history, right? So, let's take a
look at IBM's biggest and baddest computer on the planet, the System z10
EC.

... snip ...

z196 (next machine after z10) peak I/O benchmark doing 2M IOPS with 104
FICON (ficon is a mainframe channel paradigm layer built on top of FCS
that significantly reduces throughput compared to base FCS)
.... compared to recently announced FCS for e5-2600 blade claiming over
a million IOPS for a single FCS
https://en.wikipedia.org/wiki/FICON

z196 (newer than z10) has 14 system assist processors for I/O that will
handle up to 2.2M SSCH/sec with all SAPs running at 100% CPU
... but the recommendation is to limit SAPs to 70% CPU or 1.5M SSCH/sec. So
far, ec12 claims are that it will be able to do 30% more IOPS than z196.
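
the SSCH/sec arithmetic (python; the ec12 line simply applies the claimed
30% to the z196 peak, an assumption on my part):

# z196 SAP I/O rate arithmetic from the figures above
peak_ssch = 2_200_000                   # 14 SAPs at 100% CPU
print("at recommended 70%%: %.2fM SSCH/sec" % (peak_ssch * 0.70 / 1e6))   # ~1.54M
print("ec12 at +30%% (if applied to the peak): %.2fM SSCH/sec" % (peak_ssch * 1.30 / 1e6))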

Stockman in "The Great Deformation: The Corruption of Capitalism in
America" ... talks about how stock buybacks are a mini-form of LBO, with
the executives reaping huge rewards, pg457/loc9844-46:

The leader was ExxonMobil, which repurchased $160 billion of its own
shares during 2004-2011. It was followed by Microsoft at $100 billion,
IBM at $75 billion, and Hewlett-Packard, Procter & Gamble, and Cisco
with $50 billion each. Even the floundering shipwreck of merger mania
known as Time Warner Inc. bought back $25 billion.

... snip ...

goes into lots of detail about executives managing their bonuses tied to
stock price via stock buybacks (reduces number of shares so
earnings/share goes up).
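
trivial illustration of the buyback effect on earnings/share (python,
made-up numbers):

# made-up numbers: identical earnings, fewer shares after a buyback
earnings = 10_000_000_000          # hypothetical net income ($)
shares = 1_000_000_000
repurchased = 0.10                 # 10% of shares bought back
print("EPS before buyback: %.2f" % (earnings / shares))
print("EPS after buyback : %.2f" % (earnings / (shares * (1 - repurchased))))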

a little more from stockman, just a little on IBM last decade or so,
pg464/loc9995-10000:
IBM was not the born-again growth machine trumpeted by the mob of Wall
Street momo traders. It was actually a stock buyback contraption on
steroids. During the five years ending in fiscal 2011, the company spent
a staggering $67 billion repurchasing its own shares, a figure that was
equal to 100 percent of its net income.

pg465/loc10014-17:
Total shareholder distributions, including dividends, amounted to $82
billion, or 122 percent, of net income over this five-year
period. Likewise, during the last five years IBM spent less on capital
investment than its depreciation and amortization charges, and also
shrank its constant dollar spending for research and development by
nearly 2 percent annually.

... snip ...

my biggest quibble with stockman is that he glosses over the rating
agencies selling triple-A ratings on toxic CDOs (when they knew
they weren't worth triple-A, from congressional Oct2008
hearings). Those triple-A ratings significantly enabled the over $27T
done during the bubble ... and that $27T significantly dwarfs many of
the other issues he cites. reference to over $27T
Evil Wall Street Exports Boomed With 'Fools' Born to Buy Debt
http://www.bloomberg.com/apps/news?pid=newsarchive&refer=home&sid=a0jln3.CSS6c

There is an analogy with big data. It used to be possible to sort
data by its value ... and in the days of dollars/mbyte ... only keep
data that had value at least the cost of the disk storage. As disk
capacity drastically increased ... and price radically dropped ... it
was possible to keep enormous amounts of data that had significantly
less value. Much of "big data" is brute force extracting additional
value from this enormous amount of data (sort of the chess depth of
search). 2Tbyte disks are currently going for around $100 (five
cents/gigabyte).

Yesterday, there was a thread started in the ibm-main mailing list "SAS
Deserting the MF?"
http://www.informationweek.com/windows/showArticle.jhtml?articleID=177103418

SAS High-Performance Analytics software is designed to take advantage
of highly distributed, massively parallel processing (MPP) on
memory-intensive X86 servers. It has been a big strategic push for SAS
over the last two years, as customers demand ever-faster performance.

the science center pioneered a number of performance techniques ... including
some that later morph into capacity planning; snapshot performance counters every
couple minutes, hot-spot execution analysis, modeling and simulation,
workload profiling, analytical system&workload modeling (done in
APL) etc.

Some of the APL-based analytical system&workload modeling morphs
into the performance predictor available on the (virtual
machine based) world-wide sales&marketing support online HONE
system ... some past posts mentioning HONE
http://www.garlic.com/~lynn/subtopic.html#hone

customer sales/marketing people could enter customer configuration and
workload information and ask "what-if" questions about what happens if
there are changes to hardware configuration and/or workload.
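
a minimal sketch of the kind of "what-if" calculation involved (python)
-- NOT the actual performance predictor, which was a far more elaborate
APL system/workload model; this just uses a single-server queueing
approximation with hypothetical numbers:

# hypothetical what-if: estimate response time change when the processor
# is upgraded or the workload grows, using a simple M/M/1 approximation
def response_time(arrival_rate, service_rate):
    util = arrival_rate / service_rate
    assert util < 1.0, "configuration is saturated"
    return 1.0 / (service_rate - arrival_rate)

current = response_time(arrival_rate=40.0, service_rate=50.0)    # existing config
upgraded = response_time(arrival_rate=40.0, service_rate=60.0)   # faster processor
more_work = response_time(arrival_rate=45.0, service_rate=50.0)  # workload growth
print("current: %.3fs  upgraded: %.3fs  more work: %.3fs" % (current, upgraded, more_work))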

A little over a decade ago, I was involved at a datacenter that ran a
450+K Cobol statement application every night on 40+ max. configured
IBM mainframes. The number of mainframes was sized based on being
able to get all the work done in the overnight batch
window.

They had a group of 100 or so people in the performance group that for the
past couple decades had been involved in the care&feeding of this application
... much of it was various kinds of hot-spot execution analysis.

They had brought in somebody that had obtained rights to a descendent
of the performance predictor in the early 90s (in the era when
IBM went into the red and was spinning off a bunch of stuff), had run it
through an APL->C converter and was using it as the basis for a large
datacenter performance consulting business ... and had come up with
possibly 7% performance improvement.

One of the other performance analyses that was done at the science
center was multiple regression analysis of the snapshot performance
data to identify stuff at the macro level (as opposed to the micro
level that lots of the other stuff concentrated on). I offered to do
multiple regression analysis of the workload activity data. I started
out with a freebie off the net ... but they got me a PC-based SAS
license for larger aggregate data analysis. This uncovered a 14%
savings ... it identified a high level function that was accounting
for 21% of processing ... involving enormous amounts of highly
optimized low level code ... but it was being repeated three times for
a specific operation when it should have only been done once. Before I
started, I suggested that I should be compensated based on 5% of the
savings (aka 14% of over a billion dollars in IBM mainframes) ... never
happened.
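
a sketch (python, synthetic data) of the macro-level regression idea --
regress total interval CPU against per-function activity counts -- plus
the arithmetic behind the quoted savings; the per-function counts and
costs below are made up:

import numpy as np

# synthetic snapshot data: activity counts for three functions per interval
rng = np.random.default_rng(0)
counts = rng.poisson(lam=(50, 30, 20), size=(200, 3)).astype(float)
true_ms_per_call = np.array([2.0, 5.0, 1.0])             # hidden "real" costs
cpu_ms = counts @ true_ms_per_call + rng.normal(0, 5, 200)

# least-squares fit recovers the per-function cost from aggregate data
est, *_ = np.linalg.lstsq(counts, cpu_ms, rcond=None)
print("estimated ms/call:", np.round(est, 2))

# the savings arithmetic cited above: a function using 21% of total
# processing, executed three times when once would have been enough
print("potential savings: %.0f%%" % (0.21 * (2.0 / 3.0) * 100))   # ~14%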

...

more topic drift ... the science center had done the port of apl\360 to
cp67/cms for cms\apl ... eliminating the apl\360 multitasking and
swapping support ... and adding large memory operation and an API to
system services (like being able to do file i/o). This opened things
up to large real-world applications ... including business modeling by
the Armonk business people (which required some security, cambridge
allowed non-employee access to the system ... including students,
staff & professors from various institutions of higher learning in the
cambridge/boston area; and Armonk had loaded the most valuable
corporate asset, detailed customer data).

apl\360 allocated a new storage location on every assignment statement
... when it reached the end of the workspace ... it would "garbage
collect" ... condensing all in-use storage to a contiguous area. For
the typical apl\360 16kbyte workspace that was swapped as a single unit
... it wasn't a big impact. cms\apl workspaces were as large as virtual
memory (several mbytes) and demand paged .... even small apl
applications could touch every byte of storage in several megabyte
virtual memory ... resulting in lots of page thrashing. Part of
cms\apl was redoing apl garbage collection to eliminate the enormous
page thrashing hit (basically lots more smaller, more frequent garbage
collection ... rather than waiting until every byte of the workspace
had been touched)
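
a toy model (python) of that trade-off -- collecting only when the whole
workspace is exhausted touches every page of a demand-paged workspace,
while smaller/more frequent collections keep the touched footprint near
the live-data size; the sizes are illustrative:

PAGE = 4096

def pages_touched(live_size, assignment_size, gc_threshold):
    # every assignment allocates fresh storage above the live data;
    # a compacting collection is triggered when allocation reaches the
    # threshold -- return how many pages were referenced before that
    top = live_size
    while top + assignment_size <= gc_threshold:
        top += assignment_size
    return -(-top // PAGE)        # ceiling division

workspace = 4 * 1024 * 1024      # multi-megabyte cms\apl workspace
live = 64 * 1024                 # actual live data
print("collect only when workspace full:", pages_touched(live, 1024, workspace), "pages")
print("collect early and often         :", pages_touched(live, 1024, 256 * 1024), "pages")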

...

This is a Boyd theme from his OODA-loop ... not only is it constant
observe, orient, decide and act ... but it is constantly observing
from every possible facet & viewpoint ... a countermeasure to getting
locked into a specific mind set.
https://en.wikipedia.org/wiki/OODA_loop

Execution hot-spot analysis, in contrast, is myopic observing from a
specific point of view ... not being able to see the forest for the
trees ...

As I've mentioned before, I used to sponsor Boyd's briefings at IBM
... and even tho John has passed there are regular Boyd get togethers
... including one annual gathering sponsored by the Marines at Marine
Corps Univ. in Quantico.
http://www.garlic.com/~lynn/subboyd.html

--
virtualization experience starting Jan1968, online at home since Mar1970

PaulGBoulder@AIM.COM (Paul Gilmartin) writes:
"... is killing traditional ..." is a paraphrase of "progress". In the 1950s
you might have heard "The electronic computer is killing traditional
punched card tabulators." Or in the 1960s "The transistor is killing
traditional vacuum tube computers."

This is unpleasant only to those overly invested in the obsolescent
technologies. And not all such technological prognostications become
reality. Magnetic bubbles. Cryogenic computers. Quantum computers.

part of the issue is that the x86 server chip makers are saying they
ship more x86 server chips to cloud operators than they are shipping
to the brand name server vendors ... and those cloud volumes aren't
included in the x86 server market numbers (aka cloud x86 servers are
larger than the total of brand name x86 server "market").

another part of the issue is that the cloud operators for a decade or
so have been claiming they build their own servers for 1/3rd the price
of the same servers from brand name x86 server vendors (contributed to
huge downward pressure on profit margins in the rest of the
market). There is even a rumor that some of the brand name vendors have
gotten into this extremely low margin business ... assembling x86
components for cloud operations.

when something becomes larger than what is thought to be the major
market ... then it may be time to pay some attention.

ibm has a $1815 base price for the e5-2600 blade (a common x86 server found
at cloud operators). it is two 8-core chips for 16 processors and a
benchmark of 527BIPS or $3.44/BIPS (compared to 80processor z196 at
50BIPS @$28M or $560,000/BIPS); cloud vendors' claims of assembling
servers at 1/3rd the price of brand name vendors would possibly bring
it close to $1/BIPS.
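
the $/BIPS arithmetic (python, numbers from above):

# price/BIPS from the figures above
e5_price, e5_bips = 1815.0, 527.0
z196_price, z196_bips = 28_000_000.0, 50.0
print("e5-2600 blade : ${:.2f}/BIPS".format(e5_price / e5_bips))          # ~$3.44
print("z196          : ${:,.0f}/BIPS".format(z196_price / z196_bips))     # $560,000
print("self-assembled: ${:.2f}/BIPS".format(e5_price / 3.0 / e5_bips))    # ~$1.15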

e5-2600v2 with new chip technology due out this year is predicted to
be twice the performance with 12cores/chip ... bringing the e5-2600 blade
to over 1TIPS. There are also references to an e5-4600 blade in the same
form factor with four chips ... which could be well over 2TIPS.

aggregate mainframe sales for the past several years (before the
latest decline) have been approx. equivalent to 180 80-processor z196s
which at 50BIPS comes out to about 9TIPS/year ... less than
half a rack of e5-2600 blades.

big cloud operators are building new megadatacenters, each with
hundreds of thousands of such blades (and millions of processors).

last year's financials had mainframe processors at 4% of revenue but the total
mainframe business (with software, services, etc) was 25% of revenue
(and 40% of profit). that has mainframe customers paying ibm an avg. of
6.25 times their processor purchase for mainframe operations ... i.e.
TCO just to IBM for a $28M 80processor, 50BIPS z196 then averages a total
of $175M or $3.5M/BIPS (compared to possibly $1/BIPS at cloud
operations, a factor of 3,500,000 times difference).
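
and the TCO arithmetic (python, same figures):

# TCO-to-IBM arithmetic from the revenue split above
processor_share, total_share = 0.04, 0.25
multiplier = total_share / processor_share                  # 6.25x
tco = 28_000_000.0 * multiplier                             # $175M
print("multiplier: {:.2f}x  TCO: ${:.0f}M  ${:.1f}M/BIPS".format(
    multiplier, tco / 1e6, tco / 50.0 / 1e6))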

--
virtualization experience starting Jan1968, online at home since Mar1970

3081 was built out of pieces that were originally intended for Future
System ... as opposed to s/370 (while Amdahl's machine was built to be
s/370) ... it helps explain the poor 3081 performance in relationship to
the enormous number of circuits in the machine (especially compared to
Amdahl's machine)
http://www.jfsowa.com/computer/memo125.htm

3081D ... the original 3081D was supposed to have both processors running at
5mips (10mips aggregate) ... but some benchmarks came out 20% slower
than the 4.5mip 3033. the size of the processor cache was doubled for the
3081K ... which supposedly improved each processor from 5mips to 7mips (but
each 3081d processor was hardly 5mips to begin with).

SAS Deserting the MF?

sipples@SG.IBM.COM (Timothy Sipples) writes:
I must take issue with the "BIPS" measurement and the cross-architecture
comparisons presented in this discussion. They're extremely misleading at
best. "MIPS" and "BIPS" are perilous enough within zEnterprise capacity
estimations, but they go haywire rapidly when carried elsewhere.

the industry standard benchmark dhrystone is used for MIPS, BIPS, TIPS
... which is not actual instructions/sec ... but the number of
iterations/sec scaled to a 370/158-3 assumed to be 1MIPS (aka an ibm
mainframe is used as the industry standard baseline for MIPS, BIPS, &
TIPS)
https://en.wikipedia.org/wiki/Instructions_per_second
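
i.e. the scaling is just (python; the baseline iterations/sec below is a
placeholder, not a measured 370/158-3 figure):

# dhrystone-style "MIPS" is relative: iterations/sec divided by the
# iterations/sec of whatever reference machine is defined as 1 MIPS
BASELINE_ITERS_PER_SEC = 1_000.0      # placeholder reference-machine rate

def relative_mips(iterations_per_sec):
    return iterations_per_sec / BASELINE_ITERS_PER_SEC

# a machine running the benchmark 50,000x faster than the baseline
print(relative_mips(50_000 * BASELINE_ITERS_PER_SEC), "MIPS (i.e. 50 BIPS)")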

one of the claims about x86 performance increase in BIPS over the past
decade has been attributed to competition between multiple vendors
producing x86 chips. the other claim is that for the past several x86
chip generations they've gone to RISC cores with a hardware layer that
translates x86 instructions into RISC micro-ops (for decades, RISC
performance has been significantly better than x86, but the x86 move to
RISC cores appears to be negating that difference).

Everything in LPA is _not_ mapped into every address space.
Just the modules used.

so LPA came up in one of my virtual memory arguments with the POK
favorite son batch operating system group adding paging support to
os/360 ... initially for OS/VS2 SVS.

they wanted a page replacement algorithm that approximated LRU ... but
they claimed they had done some simulation that allowed them to tweak LRU to
improve efficiency ... involving selecting non-changed pages before
changed pages (avoiding the page write overhead prior to
reusing the page location for reading in the replacing page). I told
them that it would completely mess up any LRU characteristics.

well into MVS releases ... it dawned on somebody in POK OS/VS2 land that
they were selecting high-use, shared LPA virtual pages (aka non-changed)
for replacement before non-shared, low-use, private application data
pages (aka changed).
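
a toy simulation (python, synthetic reference string) of why preferring
non-changed pages works against LRU: the heavily-used read-only
(LPA-like) pages are exactly the ones that get stolen:

import random
from collections import OrderedDict

random.seed(1)
SHARED = [("lpa%d" % i, False) for i in range(4)]          # (page, changed=False)
refs = []
for i in range(2000):
    if random.random() < 0.5:
        refs.append(random.choice(SHARED))                 # hot, never changed
    else:
        refs.append(("data%d" % (i % 200), True))          # cold, changed

def run(frames, prefer_clean):
    resident = OrderedDict()                               # page -> changed, LRU order
    faults = 0
    for page, changed in refs:
        if page in resident:
            resident.move_to_end(page)
            resident[page] = resident[page] or changed
        else:
            faults += 1
            if len(resident) >= frames:
                victim = None
                if prefer_clean:                           # steal oldest *clean* page first
                    victim = next((p for p, c in resident.items() if not c), None)
                if victim is None:
                    victim = next(iter(resident))          # plain LRU victim
                del resident[victim]
            resident[page] = changed
    return faults

print("plain LRU        :", run(32, prefer_clean=False), "faults")
print("clean pages first:", run(32, prefer_clean=True), "faults")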

The Facebook Special: How Intel Builds Custom Chips for Giants of the
Web
http://www.wired.com/wiredenterprise/2013/05/facebook-and-intel/
According to Frankovsky, Facebook has long worked with Intel in an
effort to squeeze more performance per watt out of the company's
processors. "Early on, we collaborated with Intel on how many cores
you could put into a package and what the maximum clock rate you could
run those at," he says. This sort of thing, Waxman says, is common
among the large "cloud" companies. Some companies, he says, will even
request chips with a certain number of "cores" per chip, where each
core is basically its own processor, and then ask for a clock speed
tuned to that particular number of cores.

My question is... does anyone know what *data* file to use with this
version of the code??? The comments at the beginning about the
"database" format looks suspiciously like the description of the
regular data file. (Of course, the *code* can be changed without
modifying the comments unfortunately...) Compare the opening comment
of the PL/I version:

Somebody at TYMSHARE had gotten (original?) fortran from stanford sail
pdp10 and moved it to their PDP10 ... and then ported it to their
vm370/cms.

I got a version ... I was waiting for a tape at the next baybunch ... but while
waiting ... somebody at an ibm location in the uk near a univ
... got a copy of the tymshare cms version at the univ, walked it across
the street to ibm (onto the internal network) and sent me a copy.

I would distribute the binary on the internal network ... anybody that
showed that they had got all the points, I would send source. Somebody in
STL did a Fortran->PLI port. Then there were all sorts of add-ons to both
the fortran and pli versions for extra points, etc.

None of it made it into any of my archives ... but a few years ago,
somebody sent me a complete cms pli distribution. data file starts
out (converted to ascii):

1
1 You are standing at the end of a road before a small brick building.
1 Around you is a forest. A small stream flows out of the building and
1 down a gully.
2 You have walked up a hill, still in the forest. The road slopes back
2 down the other side of the hill. There is a building in the distance.
3 You are inside a building, a well house for a large spring.
4 You are in a valley in the forest beside a stream tumbling along a
4 rocky bed.
5 You are in open forest, with a deep valley to one side.
6 You are in open forest near both a valley and a road.
7 At your feet all the water of the stream splashes into a 2-inch slit
7 in the rock. Downstream the streambed is bare rock.
8 You are in a 20-foot depression floored with bare dirt. Set into the
8 dirt is a strong steel grate mounted in concrete. A dry streambed
8 leads into the depression.
9 You are in a small chamber beneath a 3x3 steel grate to the surface.
9 A low crawl over cobbles leads inward to the west.
10 You are crawling over cobbles in a low passage. There is a dim light
10 at the east end of the passage.

Note that the Palo Alto Scientific Center also did VM370/CMS APL\CMS
and APL microcode assist for the 370/145.

The Cambridge Scientific Center had done port of apl\360 to cp67/cms
for cms\apl. There is a really long-winded "The cloud is killing
traditional hardware and software" discussion over in "Old Geeks"
http://lnkd.in/mGd4j5

where there is little discussion of cp67/cms and cms\apl

Tektronix tube with IBM logo that plugged into the side of a 3277. The
3272/3277 still had lots of electronics in the 3277 head ... making
the tektronix hack (3277ga) possible (as well as some number of other
hacks). The change to 3274/3278 moved a lot of electronics back into the
3274 controller (to save on terminal head manufacturing costs). The
change also made response significantly slower and significantly drove
up chatter on the coax cable. We complained to the communication product
group about the problem for interactive computing, they eventually came
back and said that the 3274/3278 design point wasn't interactive computing
but "data entry" (computerized card keypunch entry). Later with PC
terminal emulation ... the 3278 emulation card did uploads/downloads at
1/3rd the speed of the 3277 emulation card (because of the significantly
greater 3278 protocol chatter on the coax).

(direct channel attach) 3272/3277 had hardware response of .086secs;
vm370/cms with .11sec system response resulted in .196 (better than
quarter sec). 3274/3278 was .53sec hardware response ... making
(human) quarter second response impossible (lots of studies from the
period about human factor benefits of quarter second response).

I had lots of systems on the west coast at .11second. However, there
was a research institution on the east coast that claimed it had best
vm370/cms in the world with .2sec system response (for similar
workload and configuration). When the difference was pointed out to
them, they bluffed.

Lots of the 3277ga were driven by APL

...

That is why I specified direct channel attach in the above ... which was
the 3272 hardware .086 seconds ... including the controller/channel
... compared to direct channel attach 3274 .53sec (with system
response of .11sec ... yielding .196 and .64 respectively seen by the human
... during the days when lots was being written about quarter second
human response)

Note in 1980, STL (now renamed silicon valley lab) was bursting at the
seams and they were going to remote 300 people from the IMS group to
offsite bldg ... with "remote" 3270 back into STL datacenter ... they
tested and found the 19.2kbit/sec "remote SNA links" totally
unacceptable ... they were used to vm370/cms direct channel attach
... note that if you were an MVS user the whole paradigm is different
since MVS tended to have 1sec (or worse) system response ... which
somewhat dwarfs the 3272/3274 hardware issue .... it was only an issue
for vm370/cms direct channel attach ... which a lot of internal
datacenters were using ... including those developing for MVS
platforms.

In any case, I got sucked into doing the support for HYPERChannel
channel extender ... so they could have channel attached 3270
controllers at the offsite bldg. Even tho there was a T1 (1.5mbit/sec)
link involved ... the 300 in the IMS group didn't notice any
difference. The issue in the HYPERChannel channel extender was that
the whole channel program was downloaded to the remote channel
emulator ... eliminating a huge amount of the latency and to/fro of the
standard channel protocol ... which more than offset the fact that
part of the path was a full-duplex T1 link (i.e. capable of
1.5mbits concurrent in each direction).
http://www.garlic.com/~lynn/subnetwork.html#hsdt

As an aside, this came up recently with FICON ... I had been asked in
1988 to help LLNL get some serial stuff they were using through the
standardization process ... which eventually turns into FCS. Later IBM
mainframe channel engineers become involved and defined a layer on top
of FCS that eventually becomes FICON ... but also enormously cuts
throughput compared to native FCS. Recently a new feature for FICON
was defined that downloads the channel program package to the remote end
(zHPF/TCW, cutting some of the enormous back&forth channel protocol
latency ... something I had done nearly 30yrs earlier for HYPERChannel
and which was included in the original FCS standardization) ... bringing
FICON a little closer to native FCS thruput.

Recently there was z196 peak i/o benchmark done involving 104 FICONs
getting 2M IOPS. There was also recent announcement of FCS for e5-2600
claiming over 1M IOPS (for single FCS; aka two such FCS could have
higher throughput than 104 FCS with FICON layered on top).

For further topic drift ... Network Systems wanted to get rights to my
HYPERChannel support and release it to customers. At the time, there were
some people in POK playing with what would eventually get released as ESCON
on ES/9000 more than a decade later ... and they got it veto'ed
... because they were afraid that if the HYPERChannel support was in the
market place ... it might impact their being able to get "ESCON"
released. However, by the time ESCON was released, we were about to
start playing with FCS ... making it obsolete.

... we were also heavily involved in Harrier ... for HA
configurations. We then wanted to get Harrier serial standardized so
that it inter-operated as fractional FCS (with either copper or
fiber-optic) ... but things got confused and Harrier gets standardized
as its own proprietary SSA (non-interoperable). Old post on the
subject from long ago and far away
http://www.garlic.com/~lynn/95.html#13

also mentions meeting in Ellison's conference room in early Jan1992
about doing 128-way HA/CMP cluster scaleup by ye1992. Problem was that
end of Jan1992, it gets transferred to Kingston, we are told we can't
work on anything with more than four processors ... and a couple weeks
later it is announced as IBM's supercomputer (for scientific *ONLY*)

--
virtualization experience starting Jan1968, online at home since Mar1970

What Makes code storage management so cool?

Peter Flass <Peter_Flass@Yahoo.com> writes:
Lynne has gone into a lot of detail about the rationale for the common
area. At the time I didn't realize that this was just a kludge, and
thought it was a "feature."

aka pointer passing API; for SVS, all of MVT is in a single 16mbyte
virtual address space ... kernel, subsystems, applications, with the pointer
passing API used for both kernel calls and subsystem calls. SVS->MVS,
each application gets its own 16mbyte virtual address space ... but to make
kernel calls work, the kernel image occupies 8mbytes of every address space.
The problem is that both applications *AND* subsystems now reside in their
own virtual address spaces ... making it difficult for the pointer passing
API to work for application-to-subsystem calls.

the solution was the common segment area ... another area mapped
into every address space for parameter passing in application-to-
subsystem calls. the problem was that it needed to grow somewhat
proportionally to concurrent activity and the number of subsystems. Late in
the MVS/370 cycle ... the common segment area (renamed common system area)
was pushing 5-6 1mbyte segments ... with the kernel area at 8mbyte ... it was
threatening to reduce the address space left for apps to 2mbyte (or less),
out of 16mbyte.

for some additional topic drift ... early 70s, I do a page-mapped
filesystem for cp67/cms ... along with memory mapping "shared"
executable images (in multiple different virtual address spaces), some
past posts
http://www.garlic.com/~lynn/submain.html#mmap

the problem was that lots of CMS relied on OS/360 compilers and assemblers
... which support something called relocatable address constants. These
are addresses embedded in executable images that have to be dynamically
modified by the loader when the image is brought into storage for execution
(based on the address that the executable is loaded at). This works in
the os/360 world where file i/o has to completely fetch the image before it
can be executed ... and modifying storage (originally non-paged real
storage) wasn't an issue.

however, it is horrible for memory-mapped operation ... needing
dynamic modification on load undermines using the same, shared image
concurrently in multiple address spaces. some past posts
http://www.garlic.com/~lynn/submain.html#adcon

tss/360, designed for a page-mapped environment, had addressed the issue by
separating the executable image from the location-dependent address
constants ... the location-dependent address values can be
manipulated independent of the executable image.
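
a toy illustration (python, not any real loader format) of the
difference: an image with embedded adcons has to be patched per load
address (so the patched copies differ and can't be shared), while keeping
the location-dependent addresses in a separate per-address-space table
leaves the image itself identical everywhere:

# "image" as a list of (kind, value) entries; adcon values are offsets
IMAGE = [("op", 1), ("adcon", 0x100), ("op", 2), ("adcon", 0x200)]

def os360_style(load_address):
    # relocating loader: patches the adcons inside the copied image
    return [(k, v + load_address) if k == "adcon" else (k, v) for k, v in IMAGE]

def tss360_style(load_address):
    # image left untouched; location-dependent addresses kept separately
    table = [v + load_address for k, v in IMAGE if k == "adcon"]
    return IMAGE, table

print("patched copies identical?", os360_style(0x20000) == os360_style(0x80000))  # False
img_a, tbl_a = tss360_style(0x20000)
img_b, tbl_b = tss360_style(0x80000)
print("unpatched image shared?", img_a is img_b, "per-space tables:", tbl_a, tbl_b)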

--
virtualization experience starting Jan1968, online at home since Mar1970

Codd wrote relational at sjr/bldg.28 and the original relational/sql
implementation was System/R, done on a vm370 370/145 in
bldg.28. Corporate was all tied up in EAGLE as the followon to IMS and
while they were distracted ... we were able to do technology transfer to
Endicott to get System/R out as SQL/DS. Later when EAGLE imploded
... there was a request about how fast a System/R port to MVS could be
done .... which was eventually released as DB2 for decision support
... not DBMS transaction processing. some past posts mentioning system/r
http://www.garlic.com/~lynn/submain.html#systemr

the science center was at 545 tech sq on the cambridge side of the charles
... did virtual machines, the internal network, gml was invented there in
1969 (a decade later it morphs into iso standard sgml, another decade and
it morphs into html at cern) ... bunch of other stuff ... some past posts
http://www.garlic.com/~lynn/subtopic.html#545tech

In the late 80s, a senior disk engineer got a talk scheduled at the annual,
world-wide, internal communication group conference and opened the
talk with the comment that the communication group was going to be
responsible for the demise of the disk division. The issue was that
the communication group had strategic responsibility for everything
that crossed the datacenter walls and were fighting off client/server
and distributed computing trying to protect their dumb terminal
paradigm and terminal emulation install base. The disk division was
seeing data fleeing the datacenter to more distributed computing
friendly platforms with the drop-off in disk sales. The disk division
had come up with several products to address the situation, but they
were constantly being veto'ed by the communication group. This also
contributed significantly to the company going into the red a few
years later. Past posts mentioning dumb terminal emulation (&
communication group defending its dumb terminal paradigm)
http://www.garlic.com/~lynn/subnetwork.html#terminal

When the communication group could not block the release of tcp/ip
... they asserted that they were responsible for the interface
controller ... which then became significantly slower and more
expensive. The standard product got about 44kbytes/sec sustained using
nearly a full 3090 processor. I did the enhancement for RFC1044
(hyperchannel) for the next incremental release and in some tuning
tests at Cray Research between a 4341 and a Cray, got sustained channel
media throughput using a nominal amount of the 4341 (possibly 500 times
improvement in the number of bytes moved per instruction executed). past
posts mentioning doing RFC1044 support
http://www.garlic.com/~lynn/subnetwork.html#1044

The initial port of tcp/ip to MVS was done by adding simulation for
some vm370 API. Later, the communication group contracted with
somebody to add tcp/ip support to VTAM. When he initially demonstrated
the support, he was told that everybody knows that LU6.2 is much
faster than TCP/IP (not the other way around) and he would only get
paid for a *correct* TCP/IP implementation.

The 3277 keyboard had a bunch of electronics that for the 3278 got moved
back into the 3274 controller (cutting manufacturing cost) ... which
accounted for some of the increase in protocol chatter &
latency. Hacks had been done to the 3277 keyboard electronics to
change the repeat key delay and repeat key rate (which was no longer
possible with 3278 since the electronics were now back into the 3274).

Also, 3270 architecture was half-duplex which was OK for data entry
... but could get really annoying for interactive computing. One of
the issues was if the system was doing a write to the screen the same
moment that a key was pressed, it would lock the keyboard. Somebody
built a small FIFO character box for the 3277, you unplugged the
keyboard from the display head, plugged the FIFO box into the display
head and plugged the keyboard into the FIFO box. If screen was being
written at same moment as incoming keystroke, the FIFO box would queue
the keystrokes until the write was finished ... and then present the
keystrokes. Again, with everything moved back to the 3274 this became
impossible. It also eliminated being able to plug a tektronix head into
the side of a 3278 display.

past post with reference that ANR upload/download rate was three times
DFT (in part because of the significant hardware increased latency for
each operation)
http://www.garlic.com/~lynn/2001m.html#17
past post with pieces of 3272/3274 hardware comparison report from
circa 1980 ... (aka 3272 hardware capable of 10-11 simulated
screens/sec compared to possibly 2-3 with 3274; with approx
2kbytes/simulated screen)
http://www.garlic.com/~lynn/2001m.html#19
past post with excerpt from late 80s reference to MYTE (terminal
emulation support) getting 70kbytes/sec with PCCA & PCNET compared to
15kbytes/sec with 3274/TCA (getting almost as fast as 3272/ANR)
http://www.garlic.com/~lynn/2005r.html#17

note the MYTE 70kbytes/sec with PCCA&PCNET and 15kbytes/sec with
3274/TCA compares to the sustained 1mbyte/sec I got with tcp/ip rfc1044
support between the 4341 & cray (limited by the 4341 channel speed).

also I had PC/RT with megapel display in the HYPERChannel (network
systems) booth at Interop '88 ... it was in the central courtyard at
right angles to the sun booth. trivia: sunday night before start of
interop '88 and well into monday morning the floor nets were
repeatedly crashing. resolution came shortly before the show opened
... also resulted in standard specification in rfc1122 ... misc. past
posts mentioning interop '88
http://www.garlic.com/~lynn/subnetwork.html#interop88

from above:
Back in the 1980s, the National Science Foundation created the NSFnet:
a communications network intended to give scientific researchers easy
access to its new supercomputer centers. Very quickly, one smaller
network after another linked in -- and the result was the Internet as we
now know it. The scientists whose needs the NSFnet originally served
are barely remembered by the online masses.

... snip ...

originally we were to get $20M to tie together the NSF supercomputer
centers, then congress cuts the budget and a few other things
happened, finally NSF releases an RFP. Internal politics prevents us
from bidding on the RFP. The director of NSF tries to help by writing
the company a letter (copying the CEO) ... but that just makes the
internal politics worse (as do references to the fact that what we already
have running is at least five years ahead of all RFP responses).

We already had an operational backbone with T1 links ... and some links that
were even faster. This is part of the reason that the NSF RFP specified
T1. However, the winning RFP response ... actually only installed
440kbit/sec links (not 1.5mbit/sec links). Then to sort of comply with
the letter of the RFP, T1 "trunks" were installed with telco
multiplexors to carry multiple 440kbit/sec links over the T1 trunks.

other internet trivia ... before he died, Postel (internet standards
RFC editor) used to let me do part of STD1

--
virtualization experience starting Jan1968, online at home since Mar1970

What Makes sorting so cool?

jmfbahciv <See.above@aol.com> writes:
If you can change any instruction in a sharable segment, you can affect
every other user who runs that segment of code. Now think about JIT
software which downloads from another site. If it's sharable and got
changed, you're now running software which has not gone through the
usual vetting that build procedures invoke. Compiling, linking and
saving programs was a form of ensuring that only "good" code got
through.

cp67/cms had been "protecting" shared pages by a complicated hack with 360
storage protect keys ... it required violating the virtual machine
architecture for virtual machines running with shared pages ... since
the PSW key had to be forced to non-zero and shared pages forced to not
match the PSW key (and non-shared pages had to match the PSW key).

for the morph to vm370/cms ... things were reorganized to take advantage of
the 370 r/o segment protect ... things were running fine on an early
(internal) 370/145 (which implemented full 370 virtual memory
architecture) ... still before product release.

then 370/165 was running into scheduling problems retrofitting virtual
memory hardware to the machine. there were escalation meetings about
what could be dropped from 370 virtual memory architecture to gain six
months in the 370/165 hardware schedule. Eventually several features
went on the chopping block ... including segment protect (over loud
objections from the vm370 group). as a result, models that had already
implemented the full 370 virtual memory architecture had to drop back to
the 370/165 subset ... and any products (like vm370/cms) that were using
the dropped features ... had to be redone. as a result, vm370/cms had to
drop back to the cp67/cms shared page hack with storage keys to protect
shared pages.

when i was an undergraduate and doing lots of work on cp67, the vendor
would periodically suggest some things for me to do ... i didn't know
about the above referenced guys until much later ... but in retrospect
some of the requests may have originated from that community.

i heard one of the issues for them ... besides the inherent security of
the virtual machine paradigm ... was that all source was shipped with the
product ... and in fact, all maintenance was shipped as source updates.
they could code review every line of code as well as build all their
executables from scratch (cp67 and then later vm370).

circa 1980, they asked if they could get the *exact* MVS source that
corresponded to all the distributed executable pieces. folklore is that the
company spent $5M on a taskforce studying the issue ... and eventually
came back and told the agency that it wasn't practical to determine the
exact MVS source that corresponded to any specific distribution.

What Makes a bridge Bizarre?

Dan Espen <despen@verizon.net> writes:
On the contrary, the less you know, the more afraid you are.
Ever point out to someone that there's radiation coming out of a CRT?

There's some big money on the nuke side too.

they aren't scared of nuclear ... they are scared because somebody has
generated a lot of scare statements (and they don't know enuf one way or
another).

Eisenhower in battle with MICC tried to use mutual-assured-destruction
to promote diplomacy (as alternative to conflict).

MICC (especially) tried to counter with the requirement for an enormous
defense budget for conflict ... get people scared of nuclear and see if you
can transfer that to generalized fear ... so enormous big main battle tanks
are funded (even tho they don't play in mutual-assured-destruction).

non-nuclear power interests have been able to leverage such fear also
... having significant amounts of money to outspend other factions (both
nuclear power as well as environmental) ... significantly downplaying
their impact on the environment (causing more health damage than nuclear
ever has).

--
virtualization experience starting Jan1968, online at home since Mar1970

part of what the big cloud operators have been doing is publishing open
specifications so that companies doing "white box" assemblies can
build to the same specs ... for smaller operations that don't want to have
their own in-house staff doing it. another in that genre

one of the issues is that big cloud operators are viewing much of
their megadatacenters as a cost item (as opposed to profit) .... open
standards help increase the volume and bring down those costs ... more
than offsetting any possible competitive advantage

from above:
Perhaps most importantly, the AWS business offers the promise of much
higher profit margins at a time when Amazon has virtually trained
investors to expect ultralow margins on its traditional online retail
business. That is considered one of the main reasons behind the
company's push to disrupt the cloud computing industry in much the
same way it upended markets for books, consumer retail and even tablet
computing.

Note that IBM used to rent/lease all its computers. For the online (virtual
machine based) on-demand service bureaus in the 60s ... one of the
biggest issues in dealing with the huge variability in online (on-demand)
use (low to peak) was allowing the cpu meter (used for rent/lease
charges) to come to a stop when the system was idle but still allowing the
system to come to full operation with incoming activity (normally the cpu
meter ran whenever the cpu and/or any channel was busy, the gimmick was to
have a channel program that would pick up incoming characters ... but
wouldn't have the channel "busy" when there were no characters). This
is analogous to big cloud operators getting server chip vendors to
allow power/cooling to drop to zero when chip is idle ... but
instantaneously come up to full operation.

After IBM switched from rent/lease to sales ... several companies got
into the business of buying computers and rent/lease (aka "3rd party"
leasing; different than the online service bureaus). Two major
differences now

1) in the early 70s case, the leasing companies still bought the
computers from IBM (and the online service bureaus were still using
"brand name" mainframes; either directly purchased from IBM or
indirectly from leasing company) ... currently big cloud operators are
buying the chips directly and bypassing the large brand name server
vendors (brand name server vendors are seeing none of the revenue).

2) i don't believe the leasing companies in the 70s were the majority of
the market. server chip manufacturers are saying they are now shipping
more chips directly to the cloud operators than to the brand name
server vendors ... which doesn't even show up in the "server market"
numbers (various of the cloud operators have been claiming they build
more servers than any one of the brand name server vendors).

In the early 70s ... IBM wasn't as concerned about the 3rd party
leasing companies ... since the computers were still purchased from
IBM ... and they helped with ibm mainframe customer base transitioning
from IBM rent/lease to sales (there is still some amount of 3rd party
leasing going on). The current cloud operators are more equivalent to
customers moving to a completely different vendor platform ... since
there is *NO* revenue flow to the brand name vendors.

Note that 1) lots of services being done on large cloud operations are
for customer facing service operations ... and places like AWS can
provide lower latency to those customers than nearly all but the
largest of in-house operations 2) the large cloud operations have done
significant work on minimizing all kinds of latency.

re: latency; for the fun of it ... I just did x-country ping ... east
coast to west coast and back ... avg. latency (elapsed time) is
.086seconds. traceroute shows 15 hops over (and 15 hops back) through
at least three different internet service providers.

where 3272 direct channel hardware was (also) .086sec and (some highly
optimized) system response was .11seconds for total seen by the user
of .196seconds (today's ping time includes host software time at
remote end). 3274 direct channel attached controller increased the
.086secs to .530 seconds ... so it was impossible to get quarter
second response. lots of us complained bitterly about the 3274 ... but
there were lots of people that didn't ... especially the majority of
TSO users who never saw system response any better than 1sec. (so
hardware degradation from .086sec to .53sec, they wouldn't notice
... and any SNA involvement enormously increased it further).

besides working with various national labs and other institutions on
numeric intensive and scientific ... we were also looking at commercial
scaleup. IBM's mainframe databases weren't portable ... so we were
working with the four primary RDBMS vendors that had products that ran
on both unix and vax/cluster.

I did a cluster global lock manager with an API similar to vax/cluster that
simplified moving their vax/cluster support over to unix (their
implementations already had a huge amount of commonality). Some of the RDBMS
vendors had strong feelings on how to improve on the vax cluster operations
... combining their suggestions with a long background in mainframe
loosely-coupled operation *AND* DBMS ... for example past posts
mentioning original relational/sql
http://www.garlic.com/~lynn/submain.html#systemr

... it was fairly straight-forward to make a lot of
improvements. an example is this reference to the early jan1992 cluster
scaleup meeting in ellison's conference room
http://www.garlic.com/~lynn/95.html#13

so within hrs after the last email in above (end jan1992), the cluster
scaleup is transferred and we are told we can't work on anything with
more than four processors. This is major motivation in deciding to leave
in 1992. This is also when company went into the red ... major
contributing factor was the stranglehold that the communication group
had on datacenters (fighting off client/server & distributed computing,
trying to preserve their dumb terminal paradigm and their dumb terminal
emulation install base) ... some past references
http://www.garlic.com/~lynn/subnetwork.html#terminal

we previously worked with some armonk hdqtrs policy people on
positioning ha/cmp against other vendor high available products. after
we left, one of the armonk hdqtrs policy people got tasked trying to
widen the market for mainframes by getting all the non-mainframe RDBMS
vendors to do mainframe ports ... and he would ask us to sit in on the
conference calls (with some of the heads of these RDBMS companies)
... to help translate between IBM/mainframe-speak and open-system
terminology.

A common refrain, declining the offer, was that the scaling support and
regression testing for a mainframe 300 disk drive configuration would
cost significantly more than the resulting (incremental) revenue (they
were already in their ROI sweet spot, mainframe support would be
negative ROI ... including that they would be competing with the incumbent
mainframe DB2)

4300s sold in similar numbers as vax/vms into that market ... the big
difference was that there were large corporate orders of several hundred
4300s at a time ... sort of the leading edge of the distributed
computing tsunami. some old 4300 email
http://www.garlic.com/~lynn/lhwemail.html#4341

arpanet had been limited by requirement for permission from central
authority and requirement for IMP. at the time of 1jan1983 great
switch-over to tcp/ip internetworking, there were approximately 100 IMPs
and 255 connected "hosts" ... while the internal network was well on its
way to passing 1000 (in large part because of the explosion in 4300
machines on the internal network). later on, the internet included
workstations&PCs as network nodes ... while they were still restricted
to dumb terminal emulation on the internal network.

In the late 80s, the internal network also fell prey to being forced to
SNA/VTAM ... which also would significantly hurt its ability to
dynamically grow and expand.

ISAM and CKD were 60s technology ... a trade-off between (abundant) i/o
resources and limited real storage. it was possible to have the data
structure index laid out on disk and write channel programs that
moved through the structure using multi-track search and/or
self-modifying i/o operations.

cyl/track position was accessed with BBCCHH and seek and seek-head
channel commands. then it was possible to search a whole cylinder for
specific record characteristics (key, data). A channel command could
read location-specific &/or record characteristic information that was
then used by a later channel command in the same channel program. This
would consume enormous channel, controller, and disk resources with
minimal processor and real storage requirements.
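
a much-simplified sketch (python; command names only, not real CCW
opcodes or formats) of the shape of such a channel program -- the
search/TIC loop runs entirely in the channel/controller/device, keeping
them busy while using essentially no processor or real storage:

# simplified model of a CKD multi-track search channel program
channel_program = [
    ("SEEK",             "BBCCHH"),      # position to cylinder/head
    ("SEARCH KEY EQUAL", "wanted key"),  # compare key field of the next record
    ("TIC",              "-> SEARCH"),   # loop back until a key matches,
                                         #  continuing across tracks (multi-track)
    ("READ DATA",        "buffer"),      # transfer the matching record's data
]
for command, operand in channel_program:
    print("%-17s %s" % (command, operand))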

The trouble was that by the mid-70s, the trade-off was starting to
invert, i/o was becoming the scarce resource while processor & real
storage were increasing enormously. I started writing email & tomes
on the issue. A report I did in the early 80s claimed that disk i/o
relative system throughput had declined by an order of magnitude over a
period of 15yrs (disk i/o had gotten 3-5 times faster, while the rest of
the system got 50 times faster). Some disk division executive took
exception and assigned the division performance group to refute my
claims. After a couple weeks, they came back and effectively said that
I had slightly understated the problem. Their analysis was then respun
and turned into a SHARE (ibm user group) session on optimizing disk
throughput.

A similar example is that in the 70s, the IMS development group in
STL would argue that System/R (original relational/sql implementation,
DB2 precursor) doubled the disk space requirement (for the relational
index, compared to IMS having direct pointers) and increased disk i/o
by a factor of 4-5 to get a record (separate I/Os to process the
relational index). By the 80s, cost of disk space had come down
significantly (negating the incremental index space cost) and
processor real storage had increased significantly (caching the
indexes, minimizing the additional index I/O overhead). some past
posts mentioning system/r
http://www.garlic.com/~lynn/submain.html#systemr

The counter-argument was IMS efficiencies significantly increased the
administrative overhead and constrained the flexibility (compared to
RDBMS). In general, hardware cost was coming down and the resources
for care&feeding of IMS were becoming more expensive and scarce (RDBMS
would be able to move into lower-value operations that couldn't
justify the relative increasing IMS related costs).

late 70s, a large national retailer with a large loosely-coupled operation
... but all sharing the same application library PDS with a 3-cylinder
directory on 3330 ... online stores would dynamically load
applications all out of the same library. At peak load, throughput across
the whole country bogged down to 2applications/second. Issue was
multi-track search of PDS directory took avg. of 1.5cyls ... first
search was full cyl at 1/3second (19 revolutions @ 3600rpm) and the
2nd was half cyl. at 1/6second ... followed by member load
... 1/2second elapsed time (during the multi-track searches, device
and shared channel&controller, were locked) ... aka the maximum
application rate for the aggregate of the large number of stores
across the whole country was 2/sec. some past posts mentioning
CKD, FBA, multi-track search, etc
http://www.garlic.com/~lynn/submain.html#dasd
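
the arithmetic behind that 2/second ceiling (python, figures from above):

# 3330 PDS directory multi-track search arithmetic
revs_per_sec = 3600 / 60.0              # 60 revolutions/sec
tracks_per_cyl = 19
full_cyl = tracks_per_cyl / revs_per_sec        # ~0.32 sec (19 revolutions)
avg_search = full_cyl + full_cyl / 2.0          # avg. 1.5 cylinders searched
print("avg directory search: %.2f sec" % avg_search)                 # ~0.48
print("aggregate application loads/sec: %.1f" % (1.0 / avg_search))  # ~2/sec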

I had offered to do MVS FBA support to get off the multi-track paradigm
... but they said that I needed a $26M business case (documentation,
training, education, etc) with incremental new disk sales (couldn't
use lifetime savings) ... *AND* people were buying CKD as fast as they
could make it ... so any FBA support would just turn into the same amount
of FBA (no incremental new revenue). some past posts getting
to play disk engineer in bldgs. 14&15
http://www.garlic.com/~lynn/subtopic.html#disk

A totally separate thing I did for the IMS group ... besides
consulting on DBMS ... in 1980, STL was bursting at the seams and they
were moving 300 from the IMS group to an offsite bldg. They tried "remote
3270" but found it horribly unacceptable. I then got con'ed to do
channel extender support so that local channel attached 3270
controllers could be put at the remote bldg. Part of the support was
downloading the device channel program to a channel emulator at the
remote site. This violated original channel architecture and negated
stuff like ISAM was doing with its fancy channel programs ... but
dramatically improved the I/O throughput (couldn't be used for some of
the more outrageous channel programs). some past posts mentioning
channel extender & other high-performance stuff
http://www.garlic.com/~lynn/subnetwork.html#hsdt

1988, I'm asked to help LLNL standardize some serial stuff they are
doing ... which in the early 90s morphs into the fibre channel standard
... included is support for downloading the i/o program ... running
asynchronously ... which dramatically increases i/o throughput. Later
POK channel engineers get involved and define a layer on top of FCS
that drastically cuts the native throughput and eventually morphs
into FICON (enforcing some of the old mainframe channel architecture
rules). Recent z196 peak I/O benchmark is 2M IOPS with 104 FICON
(layered on top of 104 FCS) ... compared to recently announced native
FCS for e5-2600 claiming over 1M IOPS (i.e. two such FCS would have
higher throughput than 104 FICON). We were using FCS for HA/CMP
cluster scaleup ... some past posts
http://www.garlic.com/~lynn/subtopic.html#hacmp

Note that sometime in the last decade, they added zHPF/TCW support to
FICON ... which comes slightly closer to native FCS's ability to
download channel programs to the remote end ... minimizing a lot of
latency in the standard processing (and similar to what I did in 1980
for channel extender) ... zHPF/TCW claims a throughput increase of
(only) 30% (compared to original FICON).

The peak z196 benchmark doesn't mention whether it was done with or
w/o zHPF/TCW ... if the peak z196 number was done w/o zHPF/TCW
... then the peak I/O benchmark with zHPF/TCW might reduce the number
of required FICON by 30% ... from 104 to 73 ... still a long way from
the two referenced for native FCS (and it is still possible that the
peak z196 i/o benchmark *WAS* done with zHPF/TCW, in which case it
would be the 104 FICON compared to two native FCS).
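
spelling out the per-channel arithmetic (numbers from above; the 73
just follows the stated 30% reduction):

# z196 peak I/O benchmark vs native FCS, per the numbers above
z196_iops, z196_ficon = 2_000_000, 104
print(z196_iops / z196_ficon)   # ~19,200 IOPS per FICON
print(z196_ficon * 0.7)         # ~73 FICON if the 2M IOPS run was w/o zHPF/TCW
# vs the native FCS claim of >1M IOPS per adapter for e5-2600:
# two adapters would exceed the whole 104-FICON configuration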

There are other long-standing compatibility issues preserving all the
mainframe legacy baggage. The peak z196 number is with the maximum
number of system assist processors (14) ... the claim is 100% SAP
processor utilization at 2.2M SSCH/sec ... with the recommendation
that normal operation should have SAP processor utilization limited to
70% (aka 1.5M SSCH/sec). The combination of SSCH processor overhead
cycles, the ancient CCW channel programming paradigm, and simulating
CKD DASD (even tho there hasn't been any real CKD DASD manufactured
for decades), exacts a heavy penalty on the mainframe.

SJR computer science had a vm370 370/145 for doing system/r
development. The SJR datacenter had a 370/195 running MVT ... there
was use from all over ... in some cases job turn-around could be
several weeks to a couple months. Palo Alto had a vm370 370/145 and
found that with some checkpointing, a job that took a few hrs on the
370/195 (but could have several week turn-around there) could be set
up to run offshift on their vm370/145 ... running at 1/30th the speed
of the 370/195 ... but still possible to get turn-around faster than
the sjr/195.

SJR adds a vm370 370/158 to the datacenter ... and eventually the
370/195 is replaced with a 370/168 running MVS ... all the 3330 dasd
strings are physically connected to both processors ... but there is
an operational rule that MVS packs will never be mounted on a vm370
3330 dasd string. One day it happens and within 5 minutes there are
huge irate calls from cms users about response going all to hell
... aka the i/o programs to that one mvs pack were locking up the
shared controller needed by vm370 to get to all the cms data
areas. vm370 operators demand that the MVS operators immediately move
the pack ... they decline ... saying they will wait until offshift.

The vm370 group spins up a specially optimized, one-pack VS1 system on
an "MVS" drive and starts some of its own i/o programming ... bringing
MVS to its knees (significantly improving CMS response) ... an
optimized VS1 running on a loaded vm370/158 can outperform an
MVS/168. The MVS operators agree to immediately remove the MVS pack
from the vm370 string ... if the VS1 pack is moved off the MVS string.

While the 370/195 was still in bldg.28, across the street in bldg.15
... they get a brand-new engineering 3033 for dasd testing. Now I have
vm370 systems running on all the bldg. 14&15 370 systems. They had
previously been doing all their testing stand-alone, machines
prescheduled 7x24 around the clock. They had once tried running MVS
hoping to enable multiple concurrent testing ... but found that MVS
had a 15min MTBF in that environment. I offer to rewrite the I/O
supervisor to make it bulletproof and never fail, enabling concurrent,
on-demand testing ... significantly improving productivity.

In any case, bldg14&15 are running online service in the spare cycles
left over from concurrent testing of development hardware (testing
taking 1-3 percent of cpu). The disk engineers are running "air
bearing" simulation on the 370/195 (part of designing thin film heads
for 3380s) ... and even with priority access getting week or two
turn-around for a 1-2hr simulation. We get the "air bearing"
simulation moved over to the bldg15 vm370 3033 ... it's only a 4.5mip
processor compared to the 370/195 at 10mips ... but it can get
multiple 2-4hr turn-arounds a day (compared to the week or two they
were getting on the 370/195).

--
virtualization experience starting Jan1968, online at home since Mar1970

from above:
Gray is known for his groundbreaking work as a programmer, database
expert and Microsoft engineer. Gray's work helped make
possible such technologies as the cash machine, ecommerce, online
ticketing, and deep databases like Google. In 1998, he received the
ACM A.M. Turing Award, the most prestigious honor in computer
science. He was appointed an IEEE Fellow in 1982, and also received
IEEE Charles Babbage Award.

from above:
On January 28, 2007, Jim was lost at sea. At the time he disappeared,
Jim was a Technical Fellow at Microsoft and previously a researcher at
Digital, Tandem, IBM and AT&T. His work on database and transaction
processing systems included RDB, ACMS, NonStop SQL, Pathway, System R,
SQL/DS, DB2, and IMS-Fast Path. He was editor of the Performance
Handbook for Database and Transaction Processing Systems, and
co-author of Transaction Processing: Concepts and Techniques, widely
regarded as the bible of transaction processing

... snip ...

Another Gray story ... before he disappeared, he had con'ed me into
interviewing in Redmond for Chief Security Architect ... the interview
stretched out over a couple of weeks ... but we never came to
agreement on a number of issues.

semi-related to IMS-Fast Path ... long ago and far away, my wife was
con'ed into going to POK to be responsible for loosely-coupled
architecture (aka shared system complexes); while there she developed
peer-coupled shared data architecture. She didn't stay long, in part
because of very little uptake of her architecture (not until SYSPLEX
and Parallel SYSPLEX, except for IMS hot-standby). Another problem
contributing to her not staying long was constant skirmishes with the
communication group, who were trying to force her into using SNA/VTAM
for loosely-coupled coordination. misc. past posts mentioning
peer-coupled shared data architecture
http://www.garlic.com/~lynn/submain.html#shareddata

--
virtualization experience starting Jan1968, online at home since Mar1970

What Makes sorting so cool?

Shmuel (Seymour J.) Metz <spamtrap@library.lspace.org.invalid> writes:
McCarthy targeted a lot more than the arts, e.g., Army, State
Department. In fact, it was his going after the Army that caused
Eisenhower to care.

in fact, Ike leveraged his vice president in dealing with McCarthy

Ike's Bluff: President Eisenhower's Secret Battle to Save the World
pg56/loc701-5:
The vice president, Richard Nixon, who had his own history as a Red
baiter and could talk to McCarthy, was dispatched to the
Senate. McCarthy told Nixon he had two speeches prepared about Bohlen,
and had not used "the real dirty one." 18 Dulles called Bohlen back to
his office. The White House would stand by him, he informed Bohlen.

... snip ...

pg133/loc1753-57:
Instead, McCarthy, in his reckless way, made the mistake of attacking
the US Army. For four years, McCarthy had blackened reputations and
ruined careers, hounding government employees, even blameless ones like
Chip Bohlen. Pouncing on a slightly pink army dentist named Dr. Irving
Peress, McCarthy started wailing, "Who promoted Peress?" and began
investigating his commanding officer, General Ralph Zwicker, denouncing
him as "not fit to wear the uniform" and possessing "the brains of a
five year old child."

... snip ...

goes on to talk about how the white house then orchestrated McCarthy's
collapse behind the scenes.

--
virtualization experience starting Jan1968, online at home since Mar1970

other silicon valley lab folklore: it was originally going to be
called the coyote lab ... after the practice of naming for the closest
post-office. However, shortly before it was to open ... there was a
demonstration in washington dc by an organization of working ladies
from san francisco known as coyote (it was spring vacation and I had
taken the kids to DC that week, just before the smithsonian air and
space museum opened). there was then quick activity to rename it the
santa teresa lab (after the closest cross street).

i used to walk or ride my bike to sjr ... and would periodically ride
my bike between sjr (bldg. 28 on the main plant site ... the bldg. no
longer exists) and STL (now silicon valley lab) ... a little over 6 miles

Truman's Secretary of Defense George Catlett Marshall was the target of
some of McCarthy's most vitriolic rhetoric. Marshall had been Army Chief
of Staff during World War II and was also Truman's former Secretary of
State. Marshall was a highly respected General and statesman, remembered
today as the architect of victory and peace, the latter based on the
Marshall Plan for post-war reconstruction of Europe, for which he was
awarded the Nobel Peace Prize in 1953. McCarthy made a lengthy speech on
Marshall, later published in 1951 as a book titled America's Retreat
From Victory: The Story Of George Catlett Marshall. Marshall had been
involved in American foreign policy with China, and McCarthy charged
that Marshall was directly responsible for the loss of China to
Communism. In the speech McCarthy also implied that Marshall was guilty
of treason;[44] declared that "if Marshall were merely stupid, the laws
of probability would dictate that part of his decisions would serve this
country's interest";[44] and most famously, accused him of being part of
"a conspiracy so immense and an infamy so black as to dwarf any previous
venture in the history of man."[44]

... and ...
This abuse of Zwicker, a battlefield hero of World War II, caused
considerable outrage among the military, newspapers, civilian veterans,
senators of both parties and, probably most dangerously for McCarthy,
President Eisenhower himself.[75] Army Secretary Stevens ordered Zwicker
not to return to McCarthy's hearing for further questioning.

... and ...
Early in 1954, the U.S. Army accused McCarthy and his chief counsel, Roy
Cohn, of improperly pressuring the Army to give favorable treatment to
G. David Schine, a former aide to McCarthy and a friend of Cohn's, who
was then serving in the Army as a private.[77] McCarthy claimed that the
accusation was made in bad faith, in retaliation for his questioning of
Zwicker the previous year.

... snip ...

--
virtualization experience starting Jan1968, online at home since Mar1970

What Makes a bridge Bizarre?

mausg writes:
Ammonium nitrate is used as explosive in South African mines. In pure
form it is not for sale in Ireland, just as CAN, (Calcium Ammonium Nitrate).
Even unobvious things like flour dust, in the right proportion in the air
can be explosive (Example from Texas, years ago).

off-the-shelf T1 encryptors were really expensive and finding faster
encryptors was really hard (almost forced to run telco multiplexors
with multiple T1 links, each having their individual encryptors). I
got involved in the design for a much faster encryptor ... DES adapted
for packets (aka could resynch on packet boundary) ... would support
1-3mbyte/sec ... and cost less than $100 a board (was also working on
a board supporting Reed-Solomon FEC running at similar speed
... fortunate to have an engineer who had done a lot of the work on
Reed-Solomon when he was Reed's graduate student)
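
a minimal sketch of the general per-packet resynch idea (definitely
not the actual board design): derive the IV for each packet from its
sequence number, so a lost or reordered packet never prevents
decrypting later packets. assumes the pycryptodome package and uses
single DES purely for illustration:

# per-packet resynchronizable encryption (illustrative only)
from Crypto.Cipher import DES

KEY = bytes.fromhex("0123456789abcdef")   # hypothetical 8-byte DES key

def packet_iv(seq):
    # IV derived only from the packet sequence number
    return seq.to_bytes(8, "big")

def encrypt_packet(seq, payload):
    pad = (-len(payload)) % 8             # pad to the 8-byte DES block size
    cipher = DES.new(KEY, DES.MODE_CBC, iv=packet_iv(seq))
    return cipher.encrypt(payload + b"\0" * pad)

def decrypt_packet(seq, ciphertext):
    return DES.new(KEY, DES.MODE_CBC, iv=packet_iv(seq)).decrypt(ciphertext)

# packet 5 decrypts fine even if packets 0-4 were never received
ct = encrypt_packet(5, b"link payload")
print(decrypt_packet(5, ct))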

The crypto group in the company then claimed that we couldn't make/use
the boards because the design severely compromised DES encryption
(mostly because of the provision for being able to resynch on packet
boundary). I spent 3 months learning enough of the right words to
convince them that rather than being much weaker than standard DES, it
was actually much stronger than standard DES. It was a somewhat hollow
victory since they then said I could make as many boards as I wanted
to, but there was only one customer in the world for the boards
(address somewhere in maryland).

some old crypto related email from the period ... mentions that
software DES runs at about 150kbytes/sec on a 3081k processor
... would need both processors in a 3081k dedicated to handling a
single full-duplex T1 line ... also has discussion of a proposal to do
a PGP-like public key implementation ... a decade before PGP
http://www.garlic.com/~lynn/lhwemail.html#crypto
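
a quick sanity check on the scale of those numbers (1.544mbit/sec T1
line rate assumed; the 150kbyte/sec figure is from the old email):

# software DES at ~150kbytes/sec/processor vs a full-duplex T1
T1_BITS = 1_544_000                  # T1 line rate, bits/sec
one_way = T1_BITS / 8                # ~193 kbytes/sec each direction
full_duplex = 2 * one_way            # ~386 kbytes/sec aggregate
des_rate = 150_000                   # bytes/sec, software DES on one 3081k processor
print("full-duplex T1: %.0f kbytes/sec" % (full_duplex / 1000))
print("processors needed: %.1f" % (full_duplex / des_rate))
# ~2.6 processors' worth -- consistent with needing both 3081k
# processors dedicated (and then some)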

The Vindication of Barb

Peter Flass <Peter_Flass@Yahoo.com> writes:
Until laser printers came in, more so than dot-matrix printers, you
were limited by the characters your printer could generate. I believe
the 2741 could type 88 different characters. Occasionally people
played games switching typeballs (AFAIK) or printed accented
characters by overprinting, but this was enough of a pain that most
people wouldn't bother unless there was some major requirement. The
laser printer's ability to print anything really opened up the field.

a lot of the 3800 font stuff for script was originally done to support
changing typeballs with the 2741 ... as well as pausing to change the
ribbon; some amount of final copy involved swapping out the fabric
ribbon and replacing it with fresh film ribbon (as well as changing
the typeball).

then in 1969, GML was invented at the science center (letters chosen
because they are the first letters of the inventors' last names) ... and
GML processing was added to script. another decade, and GML morphs into
ISO standard SGML. misc. past posts
http://www.garlic.com/~lynn/submain.html#sgml

What Makes code storage management so cool?

Peter Flass <Peter_Flass@Yahoo.com> writes:
My pet peeve is that so much software isn't Rexx-aware. Having used
XEDIT on VM, it seems that a lot of problems could be very easily
solved by creating a good Rexx interface to various pieces of
software.

early days of rexx ... while it was still rex and hadn't yet been
released to customers ... I wanted to demonstrate the power of rexx
... and decided on a demonstration to re-implement (vm370) IPCS in
rexx. The original was a large application, thousands of assembler
instructions. My objective was that the rewrite would take less than
half my time over a 3 month period, have 10 times the function and
(with some sleight of hand) run 10 times faster. I got done early and
so started doing a library of automated analysis that examined storage
for various kinds of common failure modes.

it eventually came to be used by nearly all the datacenters and most
of the customer support PSRs. I thot that IBM would also release it as
a replacement for the standard product (and since it was during the
middle of the OCO-wars, it would be released with full source) ... but
for whatever reason that never happened. I did get permission to give
talks on the implementation at various user group conferences ... and
shortly afterwards ... other implementations also started appearing.

as an aside ... the service processor for the 3090 started out being a
highly customized version of vm370/cms running on a 4331. later the
3090 service processor morphed into a pair of 4361s (the 3092). since
an old, unsupported vm370 release 6 was being used ... the 3092 group
had to provide their own support and relied heavily on dumprx. old
email referencing the 3092 group wanting to ship dumprx as part of the
3092 and make it available in the field that way:
http://www.garlic.com/~lynn/2010e.html#email861031
http://www.garlic.com/~lynn/2010e.html#email861223

--
virtualization experience starting Jan1968, online at home since Mar1970

What Makes code storage management so cool?

jmfbahciv <See.above@aol.com> writes:
I consider general DP as managing data bases. IBM refused to
listen to customers who wanted heterogeneous communications;
IBM insisted on homogeneous networks. DEC was willing to
talk to any gear which could send and/or receive signals. That's
why DEC got ahead in 1970 and beyond.

it was not only homogeneous communication ... but specifically vtam/sna
with its paradigm of controller for dumb terminals ... and later dumb
terminal emulation (and protecting the dumb terminal emulation install
base).

i've mentioned several times in the past ... trying to get 2702 terminal
controller to do something for cp67 ... that it wouldn't quite do
... big motivation for univ. starting clone controller project with
interdata/3; subsequently four of us get written up as being responsible
for some part of the clone controller business .. some past posts
http://www.garlic.com/~lynn/subtopic.html#360pcm

rolling forward to the late 80s ... a senior disk engineer gets a talk
scheduled at the internal, world-wide, annual communication group
conference and opens the talk with the statement that the
communication group was going to be responsible for the demise of the
disk division. The disk division was starting to see a downturn in
disk sales with data fleeing the datacenter to more
distributed-computing friendly platforms. The issue was that the
communication group had strategic "ownership" of everything that
crossed datacenter walls, and everything the disk division came up
with to address the problems, the communication group veto'ed (part of
their attempts to protect their dumb terminal install base and fight
off client/server and distributed computing). This significantly
contributed to the decline of the company and going into the red in
the early 90s. misc. past posts

What Makes sorting so cool?

Shmuel (Seymour J.) Metz <spamtrap@library.lspace.org.invalid> writes:
Or who want to hear a different religion. How many fundamentalist
Baptist would be happy if their children were forced to pray to Mary?
How many Catholics would care for a nice traditional denunciation of
"The Whore of Babylon"? One man's religion is another man's blasphemy;
people should make their own choices and not have them forced on them
or be forced to fund them.

There's also the question of whether entangling church and state
inevitably corrupts the religion; there is strong evidence that it
does.

there are recent news items about a new directive reminding everybody
about not pushing specific religions in the military ... the air force
academy seems to have had quite a bit of that problem.

my wife's father graduates from west point in the 30s and then gets a
graduate degree from Berkeley. in the 40s, he is in command of the
53rd armored engineer battalion ("thundering herd", 8th armored
division). I've scanned some amount of his stuff, including a division
booklet put out when they were at camp polk ... before going overseas
... includes a picture/page of men attending religious service.

On 28 Apr we were put in D/S of the 13th Armd and 80th Inf Divs and G/S
Corps Opns. The night of the 28-29 April we cross the DANUBE River and
the next day we set-up our OP in SCHLOSS PUCHHOF (vic PUCHOFF); an
extensive structure remarkable for the depth of its carpets, the height
of its rooms, the profusion of its game, the superiority of its plumbing
and the fact that it had been owned by the original financial backer of
the NAZIS, Fritz Thyssen. Herr Thyssen was not at home.

Forward from the DANUBE the enemy had been very active, and an intact
bridge was never seen except by air reconnaissance. Maintenance of roads
and bypasses went on and 29 April we began constructing 835' of M-2 Tdwy
Br, plus a plank road approach over the ISAR River at
PLATTLING. Construction was completed at 1900 on the 30th. For the month
of April we had suffered no casualties of any kind and Die
Gotterdamerung was falling, the last days of the once mighty WHERMACHT.

... snip ...

--
virtualization experience starting Jan1968, online at home since Mar1970

It wasn't an either/or situation with IBM. They offered both
big RDBMSes _and_ card-oriented batch processing - and not all
sites did both.

I was jealous of DEC's ease of interfacing almost any sort of
communications equipment. Mainframe communication protocols and
sofware were a nightmare of complexity; they were designed for
sending files across the continent, not messages across the room.

i got involved in turning it out as an official ibm product ... first
on series/1 and then ported to rios (risc/801) ... all the
feature/function was starting to bump up against some series/1
limitations (but was still significantly better than "real" 37x5
boxes). I thot I had a brick wall around all the internal politics and
funding issues ... isolating the project from communication group
efforts ... but they managed to get it killed ... in a way that can
only be described as truth is stranger than fiction (I had already
lined up orders from the largest 37x5 customers ... first yr revenue
totally covered all costs of turning it out as a product).

and a recent long-winded discussion over in (open linkedin) "Old Geeks" on
IBM dbms stuff:
http://lnkd.in/n9V_df

regulation,bridges,streams

Morten Reistad <first@last.name> writes:
Which has lead to the requirement from Visa/MC et.al that the
card is never to leave your sight. They bring the terminals to
the tables instead. Having the card out-of-sight now qualifies
for a replacement card, probably invoiced to the store that
did it.

early 90s (20yrs ago), there were cases of waiters with their own
skimmers inside their jackets ... with recorders ... at the end of
their shift, they would upload over the internet to accomplices on the
other side of the world, who would make counterfeit cards and be out
on the street using them within a couple hrs of your visit to the
restaurant. however, this was small scale stuff ... on the order of a
couple dozen or so a night ... not the millions or tens of millions
that you get in some of the breaches (but since then there have been a
lot more exploits where the skimming is being done by a standard
terminal that has been compromised).

again 20yrs ago ... there was a criminal enterprise that compromised
an ATM cash machine manufacturing company ... installing recording
devices that could be interrogated wirelessly. they recorded
information from tens of thousands of ATM cards for making counterfeit
cards. they carefully performed the fraudulent transactions attempting
to obfuscate the source of the compromise (since law enforcement would
confiscate and shutdown the machines, and invalidate account numbers
for all cards used at those machines). law enforcement noticed a
suspicious pattern of fraudulent transactions (with counterfeit cards)
at ATMs hundreds of miles away ... and set up a stake-out ... and
caught some of the low-level individuals. within an hr or two of when
the individuals were arraigned in court, the pattern of fraudulent
transactions changed (law enforcement complained that their lawyers
must have notified the rest of the group). It took longer to trace it
back to specific compromised ATM machines ... and supposedly they
never did find all of the ATM machines that had been compromised.

some of the characteristics are reminiscent of the recent data breach
resulting in $45M haul from ATM machines.

Turns out the mechanics of using the internet to distribute the
information for making counterfeit cards ... and the sophistication of
the pattern of fraudulent transactions (to obscure the source of
compromise and avoid having all accounts from the same source shut
off) ... isn't new, it's (at least) 20yrs old.

as an aside ... this type of exploit was looked at by the x9a10
financial standard working group in the mid-90s ... for which
countermeasures were developed (x9a10 had been given the requirement
to preserve the integrity of the financial infrastructure for *ALL*
retail payments).

--
virtualization experience starting Jan1968, online at home since Mar1970

from above:
By the time the astonishing heist was under way, the difficult work of
hacking prepaid debit card accounts and stripping the withdrawal limits
was long done. After that, coding the magnetic stripes on the backs of
plastic cards with the hacked account numbers was no big deal. Brooklyn
U.S. Attorney Loretta Lynch said conspirators in the global scheme,
which netted $45 million from ATMs around the world, were able to use
gift cards, old hotel keys, expired credit cards -- anything with a
magnetic stripe on the back.

... snip ...

aka once they had the information ... then they could use just
about anything with magstripe on it.

... about it being trivial to create a YES CARD using information from
a compromised terminal.

the problem was that there was a large pilot deployment in the US
about that time ... and after this came out ... all evidence of the
pilot appeared to evaporate w/o a trace. It seems to have contributed
to the reluctance to repeat such a deployment.

about the same time, law enforcement at "ATM Task Force" meeting gave a
much more detailed description of YES CARD ... and somebody in the
audience made the observation that they managed to spend billions of
dollars to prove that chips were less secure than magstripe (it turns
out that a counterfeit YES CARD could continue to be used long after
the account had been deactivated). some past posts
http://www.garlic.com/~lynn/subintegrity.html#yescard

in some countries where it has been deployed, the associations have
helped motivate merchants to install the necessary equipment by
reducing their costs ... they managed to convince the government that
when such chipcards are used in transactions ... the burden of proof
in disputes is reversed (instead of the merchant/bank having to prove
the individual actually did the transaction, the individual has to
prove they didn't do it).

a couple years ago, I was contacted by a legal representative of an
individual involved in such a case in the UK. They were disputing that
they had performed the ATM transactions ... however it was up to the
individual to produce the ATM surveillance video showing they hadn't
done it ... and the bank was saying they couldn't find the video (in
the US the bank would have to produce the surveillance video to prove
the individual had done it).

--
virtualization experience starting Jan1968, online at home since Mar1970

univ. had a 709/1401 combination, the 1401 did
tape<->card-reader,printer,punch ... and tapes were manually moved
between the 1401&709 ... where jobs ran tape->tape under 709 ibsys;
student fortran jobs ran around one second elapsed time.

ibm sold the univ. a 360/67 (to replace the 709/1401) for tss/360
... but with all the tss/360 problems, it spent nearly all the time as
a 360/65 running os/360. student fortran moved to (3-step) fortgclg
taking well over a minute. eventually HASP is installed ... but
student fortran is still over half a minute.

starting with MFT-11, I start doing carefully crafted stage-2
sysgens. sysgen is an assembly of a stage-1 deck of 40-60 cards
... which punches a box of cards for stage-2, mostly iehmove/iebcopy
statements to populate PDS members on fresh disk packs. I rearrange
all the cards in the stage-2 sysgen to make sure linklib and svclib
are the files closest to the vtoc (minimize arm seek) and the
move/copy of the highest used members are done first ... placing the
highest used members closest to the front of the file & the PDS
directory (minimize arm seek) as well as at the front of the PDS
directory (minimize multi-track search of the PDS directory for member
load). I get elapsed time for student fortran down under 13 seconds,
almost three times better than a "vanilla" system generation. Part of
the issue is that the student 3-step fortgclg is almost all job
scheduler repeated 3 times ... and a great deal of job scheduler is
open/close which involves dragging a whole slew of 2kbyte modules from
svclib thru the svc transient area (and repeatedly searching the
svclib pds directory for the same modules an enormous number of times).
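
obviously there was no python in the 60s and the real thing was
reordering punched stage-2 cards ... but the ordering heuristic
amounts to something like this (member names and usage counts below
are purely hypothetical):

# order PDS member copies by expected use, hottest first, so they land
# at the front of the file and the front of the PDS directory
usage = {                       # hypothetical usage counts
    "IEFSD061": 9000,           # job scheduler / open-close modules get hammered
    "IGC0001G": 7000,
    "IEWLCTRL": 1200,
    "IEBUPDTE": 40,
}
for member, count in sorted(usage.items(), key=lambda kv: kv[1], reverse=True):
    # in the real sysgen these became reordered iehmove/iebcopy statements
    print("copy member %-8s  (expected loads: %d)" % (member, count))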

jan1968, (virtual machine) cp67 is installed at the univ ... but still
only useable by me to play with on the weekends, most of the time the
machine still runs as a 360/65. during spring and summer of 1968 I
rewrite substantial portions of cp67, significantly cutting path
lengths. I/O queuing is FIFO; I change it to ordered-seek queuing
(about doubling the throughput of each disk for normal workloads). I
also add ordered chained requests for page requests. there was a 2303
fixed-head drum ... with head per track ... that was about 4mbytes and
transferred at 300kbyte/sec. univ. had a 2301 fixed-head drum that was
effectively the same but treated four physical tracks as a single
logical track, transferring data from four heads in parallel at
1.2mbytes/sec; it had 1/4th as many tracks, but each track was 4 times
larger. cp67 did FIFO single 4k page at a time ... having
avg. rotational delay for every transfer ... peaking out at about 80
page transfers/sec. I took the complete queue in a single channel
program ordered for optimal rotational transfer and could get very
close to 270/sec (aka the full 1.2mbyte/sec).
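
a little arithmetic showing why chaining/ordering helps so much on the
2301 (the 4k pages and 1.2mbyte/sec are from above; the ~17.5ms
rotation period is my assumption for a 2301-class drum):

# 2301 paging: FIFO single-page vs ordered/chained channel program
PAGE = 4096
XFER_RATE = 1.2e6                 # bytes/sec with four heads in parallel
ROTATION = 0.0175                 # assumed full-revolution time, sec

xfer = PAGE / XFER_RATE           # ~3.4 ms to move one page
fifo = ROTATION / 2 + xfer        # avg half-revolution latency before every page
print("FIFO: %.0f pages/sec" % (1 / fifo))                   # ~80/sec, as observed
print("chained: %.0f pages/sec (upper bound)" % (1 / xfer))  # ~290/sec; ~270/sec
                                                             # observed, near media rate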

most of it is about significant reduction in CP/67 pathlength running
MFT14 in virtual machine ... but there is also some about the custom
os/360 stage2 sysgens.

note that it wasn't until the univ. installed Waterloo's WATFOR that
student job throughput finally got better than on the 709; WATFOR was
a single job step that batched compile&execute for multiple student
jobs in a single operation.

the univ. library had gotten an ONR grant to do an online catalog and
part of the money went to a 2321 datacell. the effort was also
selected to be one of the betatest sites for the original CICS product
... and I got tasked with supporting/debugging the deployment. Turns
out there were some number of bugs because some CICS code had expected
a specific combination of BDAM options ... and the library was using a
slightly different set of BDAM options. misc. past posts mentioning
CICS &/or BDAM

Part of CICS throughput was that it became its own sub-operating
system ... it would get a big block of resources at startup as well as
open all needed files ... and then it would do all necessary
operations itself ... depending on as few OS/360 operations as
possible. This became a limitation later on because it couldn't take
advantage of things like multiprocessor operation ... I've seen
datacenters run over 100 concurrent CICS instances on a single system
... in order to utilize the resources available. After a long wait,
CICS eventually gets multiprocessor support. Lots of CICS history here
... gone 404 but lives on at the wayback machine:
http://web.archive.org/web/20050409124902/www.yelavich.com/cicshist.htm

from above:
In the early days of CICS (1968-1980) CICS Development determined that
hardware and operating systems of the period were limited in terms of
the cycles and storage needed to support high volume, online, real
time transaction processing. CICS devised its own facilities for task
dispatching, storage management and program management which
demonstrated greater efficiencies than using native operating system
facilities for similar functions.

... snip ...

--
virtualization experience starting Jan1968, online at home since Mar1970

The Vindication of Barb

Jaimie Vandenbergh <jaimie@sometimes.sessile.org> writes:
Perhaps early on, but in my earlish days with PC-class machines there
was all sorts of fuss. I recall dissemination of stronger crypto
software being problematic in the early 90s, with short breakable
length keys being okay (56 bit rings a bell) but longer ones being
forbidden. Lotus Notes was a poster child of this, and I'm sure HTTPS
was affected too amongst other things.

at the end of jan1992, after cluster scaleup is transferred, we decide
to leave. two other people mentioned in the same meeting also leave
and join a small client/server startup where they are responsible for
something called the commerce server. We are brought in as consultants
because they want to do payment transactions on the server; the
startup has also invented this technology called "SSL" they want to
use; the result is now frequently called "electronic commerce".

as part of the effort, we have to map out SSL technology for payment
business process ... we also have to do some end-to-end walk-thrus
... including the operations calling themselves "certification
authorities" ... that are selling SSL digital certificates. past
posts discussing SSL digital certificates
http://www.garlic.com/~lynn/subpubkey.html#sslcert

we also come up with several requirements for the deployment and use
of SSL ... some of which were almost immediately violated,
contributing to various exploits that continue to this day.

there are also lots of industry meetings about what level/strength
encryption that can be used ... and we also get sucked into the industry
"key escrow" meetings (i.e. stronger encryption can be used if the keys
are registered and available to gov. agencies and legal authorities).

as a result of having done "electronic commerce" ... in the mid-90s, we
get invited to participate in the x9a10 financial standards working
group ... which has been given the requirement to preserve the integrity
of the financial infrastructure for all retail payments. we do
end-to-end threat and vulnerability studies and eventually come up with
the x9.59 financial transaction standard ... some references:
http://www.garlic.com/~lynn/x959.html#x959

in part because of being involved with the past issues related to
encryption ... and in part because of the enormous number of business
processes that require financial transaction information ... the x9.59
solution is to slightly tweak the current paradigm ... and eliminate
the requirement to hide (and/or encrypt) the financial information
... relying purely on public key technology for strong authentication
(but eliminating any requirement for encryption for information
hiding). x9.59 doesn't eliminate skimming and/or breach
vulnerabilities ... but it eliminates the ability for crooks to use
the information for fraudulent financial transactions (and therefore
any risk related to such skimming and/or breaches). Now since the
largest use of SSL in the world today ... is the earlier stuff for
"electronic commerce" and hiding payment transaction information,
X9.59 (no longer requiring the information to be hidden) eliminates
the major use of SSL in the world today. past posts mentioning x9.59
http://www.garlic.com/~lynn/subpubkey.html#x959
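
a minimal sketch of the general idea (nothing like the actual x9.59
message formats; field names below are made up): the transaction is
signed with a key the consumer holds, so a skimmed/breached account
number by itself can't be turned into a valid transaction. assumes the
python "cryptography" package:

# sign-the-transaction instead of hide-the-account-number (illustrative only)
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey
from cryptography.exceptions import InvalidSignature

consumer_key = Ed25519PrivateKey.generate()     # held by the consumer / hardware token
registered_pub = consumer_key.public_key()      # registered with the issuing bank

txn = b"acct=4111111111111111;amt=42.50;merchant=example;seq=1"
signature = consumer_key.sign(txn)              # travels (in the clear) with the transaction

# the bank verifies against the public key on file; a crook with only
# the skimmed account number has no private key and can't produce a
# valid signature for a new transaction
try:
    registered_pub.verify(signature, txn)
    print("authorized")
except InvalidSignature:
    print("rejected")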

--
virtualization experience starting Jan1968, online at home since Mar1970

also have 3090VF ... but nothing close to handling 40mbyte/sec disk
arrays. Note also, high-end arrays were 40 disks, 32 data + 8 parity,
@3mbyte/sec per disk, which comes to 96mbyte/sec aggregate transfer

LLNL is major instigator for getting Cray 100mbyte/sec half-duplex channel (sort of like ibm channel but 30+ times faster) standardized as HIPPI.
https://en.wikipedia.org/wiki/HIPPI

To really sell into this market, the 3090 also needs to support
HIPPI. The only thing fast enough in the 3090 to handle HIPPI is the
expanded store bus ... but there is no i/o programming interface, just
4k-page copy to/from instructions. Part of the expanded-store address
space is carved out for the HIPPI interface, which uses a PC-like
peek/poke paradigm for getting data to/from HIPPI (but using the
4kbyte expanded store move instructions to do the peek/poke).
http://publibz.boulder.ibm.com/cgi-bin/bookmgr_OS390/BOOKS/DZ9AR004/D.1.8?SHELF=EZ2HW125&DT=19970613131822

About the same time, LLNL starts pushing for standardizing its serial
technology (in 1988 I'm asked if I can help them) ... eventually comes
out as FCS with 1gbit/sec full-duplex (2gbit/sec aggregate).
https://en.wikipedia.org/wiki/Fibre_Channel

Some of the engineers also had taken the original technology (that
eventually is released as ESCON) and makes it full duplex and about
10% faster along with using much less expensive drivers
(i.e. 220mbit/sec full-duplex, 440mbit/sec aggregate) ... which is
also available on RS/6000 as "SLA".

this is similar to ... but different from the 2301 fixed-head drum
which transferred four heads/tracks in parallel ... a single track was
300kbyte/sec, four in parallel was 1.2mbytes/sec, as mentioned in this
recent Old Geeks "Old data storage or data base" discussion

--
virtualization experience starting Jan1968, online at home since Mar1970

military secrets (the encrypted information) are supposed to be
protected for 30yrs or something (the claim is that hollywood wants
50yrs of DRM protection, longer/more protection than military secrets).

much of financial crypto is more like authentication ... knowledge of
the key can enable performing new fraudulent transactions ... changing
the key and/or account number is the countermeasure.

an attacker gets an encrypted transaction and tries all possible keys
to decrypt the message ... when a recognizable message is finally
found, they now have the encrypting key ... in the standard single-key
scenario they could then inject fraudulent financial transactions into
the network. however, in dukpt ... all they have is the key for that
specific transaction and the contents of that one transaction. every
transaction and every key requires its own brute force attack.

in the dukpt scenario ... the brute force attack only needs to take
somewhat longer than the lifetime of the transaction. in the military
secrets scenario ... the brute force attack needs to take longer than
the required lifetime of the encrypted information.

if brute force of DUKPT DES 56bit can be done in 24hrs, then 30yrs
needs another 14 bits ... if brute force of DUKPT DES 56bit can be
done in 1hr then 30yrs needs another 18 bits. Add several more bits
assuming hardware gets faster (in some predictable manner) during the
30yr period. Say after 15yrs, the key still has to protect for an
additional 15yrs ... but the hardware might be 1000 times faster (10
bits) or a million times faster (20 bits)
https://en.wikipedia.org/wiki/Brute-force_attack
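
the bit arithmetic above, spelled out (same round numbers for attack
times and hardware speedups as used above):

import math

HOUR, DAY, YEAR = 3600.0, 86400.0, 365 * 86400.0

def extra_bits(protection_period, attack_time):
    # how many extra key bits make a brute force that today takes
    # attack_time take at least protection_period instead
    return math.log2(protection_period / attack_time)

print("%.1f" % extra_bits(30 * YEAR, DAY))    # ~13.4 -> ~14 extra bits if DES falls in 24hrs
print("%.1f" % extra_bits(30 * YEAR, HOUR))   # ~18.0 -> ~18 extra bits if DES falls in 1hr
print("%.1f" % math.log2(1000))               # ~10 bits for 1000x faster hardware
print("%.1f" % math.log2(1_000_000))          # ~20 bits for a million times faster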

The other side ... is that computer and public facing services may
have little to do with the primary corporate business ... outsourcing
to the cloud can have it done much better than "in-house" skills and
frees up resources to concentrate on primary corporate objectives. Can
corporate ROI be larger with the resources&skills applied to the
mainstream corporate business?

After the IBM big downturn going into the red in the early 90s,
Gerstner "resurrected" the company (and reversed the effort that had
reorganized the company into the "baby blues" in preparation for
breaking up the company) by redirecting company focus on services and
outsourcing

After the big decline in the IBM mainframe market during the 90s
... the IBM mainframe market for the past decade has remained
relatively stable with little competition. This possibly accounts for
z196 processor price per BIPS at $560,000; total mainframe-related
revenue is 6.25 times mainframe processor revenue (avgs $3.5M/BIPS)
... and 40% of total profit (while the mainframe processor itself is
only 4% of total revenue).

The non-mainframe server market has increasing competition and the
cloud operators now account for more non-mainframe servers than the
non-cloud market. The public cloud also has lots of competition. The
claims are that competition has been a significant factor in the big
improvement in non-mainframe server features, performance, and
price/performance (e5-2600 at 527BIPS and possibly around $1/BIPS at
cloud operations compared to z196 at 50BIPS and $3.5M/BIPS).

Competition and the dramatic improvement in price/performance have
contributed to the cloud operators' focus on power&cooling costs in
their large megadatacenters ... at possibly $1/BIPS ... power&cooling
has become a significantly large percentage of total cost of operation
(compared to the ibm mainframe at $3.5M/BIPS ... mainframe costs would
have to come down by a factor of a million for power&cooling to become
a similar percentage of operational costs).

The next prediction is a possible move from x86 to ARM for large cloud
operations ... individual ARMs aren't as powerful as high-end x86
servers ... but their cost/BIPS is lower. They are also smaller ... so
it is possible to have an equivalent amount of processing power in the
same floor space at lower cost/BIPS. A major issue in this
non-mainframe server area is that little or none of the hardware
characteristics bleed through to the application level; it is one of
the reasons for being able to demo large cloud applications on
mainframe platforms (although there is a major difference between
showing functional operation and being able to close the
price/performance gap between $3.5M/BIPS and $1/BIPS).

At around $1/BIPS, it may be true that the price/BIPS decline in this
market starts to level off ... although the size of the cloud market
can be powerful motivation for new innovation (competition and the
size of the cloud market have driven lots of the x86 server chip
innovation over the last decade). One of the approaches has been to
get 100 simpler x86 cores on a chip rather than 8-12 increasingly
complex x86 cores. The simpler x86 cores are more comparable to ARM
... so it may still be possible to get another factor of ten drop in
price/BIPS.

The counter expectation is the buzz about migration off PCs and
laptops to smartphones and tablets ... which are increasingly tightly
integrated cloud devices ... which will provide increasing cloud
demand. It is still a long way from there to paying up for mainframe
at $3.5M/BIPS.

as FS was failing ... and the communication group was in the process
of formulating SNA/VTAM (including the enormous VTAM/NCP interface
complexity as a possible countermeasure to clone controllers)
... my wife was co-author of "peer-to-peer networking" (AWP39
in IBM internal documentation ... aka "architecture white paper"
... for example AWP164 was the specification for APPN). The
communication group viewed anything (internal or external) that wasn't
SNA/VTAM as competition (also the communication group had so co-opted
the term "networking" to apply to dumb terminal communication that it
was necessary to include the "peer-to-peer" qualification).

she didn't remain long ... in part because of poor uptake (except for
IMS hot-standby ... it wasn't until SYSPLEX and parallel SYSPLEX)
... but also because of frequent skirmishes with the communication
group attempting to force her into using SNA/VTAM for loosely-coupled
system coordination.

the above references mainframe channels in comparison with HIPPI, SLA,
FCS. IBM was half-duplex at 3mbyte/sec compared to HIPPI at
100mbyte/sec. email from long ago and far away, about the kingston
engineering & scientific lab.

re: paging costs; i don't know of any real studies of the nature you
give as examples. other factors need to be taken into consideration ---
370 has very poor I/O interface to paging DASD ... as a result
bottlenecks can quickly occur with the whole system being throttled by
trying to shove all those pages thru a very small pipe to DASD, the
resulting economics may show the whole system configuration waiting on
DASD (i.e. DASD can be viewed as a system service, rather than a system
cost ... if it helps the overall system to perform better, some people
are viewing IS centers purely as costs ... and want to cut both
hardware & people from IS ... even tho IS provides a service that
supposedly allows other people to work more efficiently).

In general, more memory up to some bus length restrictions is probably
cheaper for pages than DASD I/O ... but a large part of that is the bad
370 channel I/O architecture (both hardware & software) bottleneck ---
rather than just pure component costs.

The Engineering&science center in Kingston has something like 20 FPS
machines, a 3090 (w/vector) and a 4381. The FPS machines provide
something like 1.5gflop aggregate crunch power. They were looking to
see how the 3090 could be integrated into the configuration. One thot
was as a DASD front-end and application scheduler. The 3090 has
128mbytes of memory and lots of 3mbyte channels for DASD I/O. However,
the FPS boxes have memory boards @ 512mbyte (max. of 4 boards per
machine, 2gbytes) and local DASD with a 40mbyte/sec data transfer rate.

Using the 3090 as DASD front-end would require dragging data off DASD
at the terribly slow rate of 3mbyte/sec ... and then transferring it a
2nd time @ 3mbyte/sec over another channel to the FPS box (an
effective rate of 1.5mbyte/sec ... unless some fancy overlapping was
done). The FPS box can get 512mbytes of data off its local dasd in a
little over 12 seconds while the best that could be (reasonably) done
w/3090 would take 170 seconds.

One might consider writing programming support to use the FPS boxes as
electronic paging stores ... and/or intelligent paging caches.
... snip ...
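
just checking the transfer arithmetic in that old email:

# 512mbytes off local FPS dasd vs dragged through 3090 channels
MB = 1_000_000
data = 512 * MB
print("FPS local dasd: %.1f sec" % (data / (40 * MB)))     # ~12.8 sec at 40mbyte/sec
print("one 3mbyte channel: %.0f sec" % (data / (3 * MB)))  # ~170 sec
print("in and back out, no overlap: %.0f sec" % (data / (1.5 * MB)))  # ~341 sec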

--
virtualization experience starting Jan1968, online at home since Mar1970

the megadatacenters are running hundreds of thousands of systems
(chips) and millions (and tens of millions) of processors
(cores). whether there are 10 cores/chip or 100 cores/chip in such a
configuration can be the difference between 500,000 blades and 50,000
blades. Some amount of current generation x86 chip complexity and size
is in features that aren't heavily used in the cloud environment. So
an individual simpler core may not have significantly less throughput
than the larger more complex cores.

A former co-worker left and started doing lots of consulting for large
chip houses in silicon valley. One of his major customers was running
(at the time) vm370 on a 3081. He had taken the mainframe AT&T C
language port ... fixed lots of bugs and significantly improved the
generated code performance ... and was using it to port lots of
industry chip tools to CMS.

The company also had lots of SGI high-end graphic workstations
supporting chip design ... and he was in the process of adding
ethernet support to interconnect the 3081 and the SGI machines. The
local rep stopped by and asked him what he was doing ... and he
explained he was putting in ethernet support. The local rep suggested
that he should instead be doing Token-Ring ... or otherwise the
company might find that the 3081 support and maintenance could
suffer. I immediately got a call and had to listen for an hour as he
used 4-letter words to describe the company. Next morning the
engineering EVP called a press conference to announce they were moving
off the 3081 to ten sun servers.

for other topic drift ... a post in a (long-winded, linkedin closed
"IBM Historic Computing") discussion about how by the time ESCON
(17mbyte/sec) was released with es/9000 in the early 90s ... it was
already obsolete

in 1980, when I did the channel extender support (for the IMS group)
and the vendor wanted to release the support to customers, the
engineers in POK (playing with the technology that was eventually
released as ESCON more than a decade later) ... get it vetoed
... because they were worried that if it was in the market place
... it would make it more difficult for them to get ESCON out the
door.

from above:
The Lotus Development Corporation was founded by Mitchell Kapor, a
friend of the developers of VisiCalc. 1-2-3 was originally written by
Jonathan Sachs, who had written two spreadsheet programs previously
while working at Concentric Data Systems, Inc.

... snip ...

--
virtualization experience starting Jan1968, online at home since Mar1970

Around here we have municipal roads, county roads, toll roads and
interstate.

there has been lots of stuff about the states having diverted road use
taxes from highway trust funds to the general fund and used them for
other purposes. then when it comes time to do road repair and other
work ... there is lots of publicity about having to raise taxes for
road work.

In 1988, I had been asked to help LLNL standardize what becomes FCS. In
1990, I was also pulled into SCI (scalable coherent interface, another
fiber-optic serial standard, being pushed for higher I/O than FCS as
well as memory bus operation) ... pushed by Gustavson out of SLAC. SCI
then shows up in the memory bus for Convex (HP risc), SGI (MIPS risc),
Sequent (i486), and Data General (i486). Later we do some consulting
at Convex, SGI, and Sequent (later HP buys Convex and IBM buys
Sequent).

much of my career I was told that I had no career and couldn't expect
promotions, also that top technical positions were quite political in
the corporation and I had managed to offend quite a number of
executives. In the early 80s, I was told that they refused to make me
an IBM Fellow, but then some of the Fellows provided funding and
project support behind the scenes; I was even included in discussions
about the creation of the STSM level. some of this was explained as
part of the significant IBM corporate culture change to make-no-waves
and sycophancy that occurred as FS was failing
http://www.garlic.com/~lynn/submain.html#futuresys

from long ago and far away ... part of a departing "goodby" message
(my ftp/anon directory had a shadow of all sorts of things, lots of
standards meetings and notes as well as internet RFCs, drafts and
other documents); "DSD" refers to the mainframe division:

from above:
The M4 generation of IBM System x servers, featuring Intel E5-2600
processors, brings compelling benefits to datacenters, even where
existing servers are a just few generations old. Processor technology
has progressed rapidly in the last five years. Not only has the
performance of processors improved dramatically, but energy efficiency
has also made a sharp leap as well. What this means is that IT
managers should reassess all of the servers they currently have in
their datacenters. Older servers may be paid for, but they can also be
quietly costing money through excessive energy usage, management
complexity, downtime, and server sprawl. Don't let your precious IT
dollars slip away.

... snip ...

One of the things sustaining the mainframe market is the enormous
amount of activity related to financial settlement in the overnight
batch window ... much of which dates back to the 60s ... and is mostly
obsolete. In the 90s, there were billions spent on parallelization on
killer micros for straight through settlement (the issue was that
globalization was increasing the workload that had to be done in the
overnight batch window ... as well as shortening the length of the
window); aka a real-time transaction runs to completion rather than
being queued for final completion in overnight batch settlement.

Unfortunately they were using some off-the-shelf technology that
increased overhead by a factor of 100 compared to cobol batch
... totally swamping any anticipated throughput improvements from the
killer micros ... and the efforts went down in flames. They hadn't
bothered to do any speeds&feeds calculations ... and even when they
were provided with the calculations ... they chose to ignore them.

Since that time there has been an enormous amount of throughput work
on parallel RDBMS operations on non-mainframe platforms (100 systems
or more) by all the RDBMS vendors (including IBM). Straight-through
processing implemented on such platforms easily handles the current
workload as well as any significant increases. Recent demos of such
implementations were met with comments that many of the institutions
still have executives bearing deep scars from the earlier efforts in
the 90s ... and they are now extremely risk averse (such institutions
won't be trying again until a new generation of executives comes on
board). This financial market segment can handle the million-times
mainframe processing cost multiplier (compared to technologies like
e5-2600) before attempting a repeat of the 90s.

The story is that Eisenhower's goodbye speech was originally a warning
about the MICC ... but he then shortened it to military-industrial
complex. The equivalent is the PRCC
... pharmaceutical-regulatory-congressional complex ... recent
long-winded item on PRCC from yesterday

CBO in 2010 had a report that (mostly after congress allowed the
fiscal responsibility act, which had required spending to match tax
revenue, to expire in 2002; part-d was the first major spending
legislation after the act expired), tax revenue was cut by $6T
(compared to the baseline which had all federal debt retired in 2010)
and spending increased by $6T (compared to the baseline) ... for a
$12T budget gap. Another report had the DOD budget increase as a
little over $2T of that $6T spending increase ... a little over $1T
for the two wars and another $1T+ that couldn't be accounted for (for
years DOD has been given an exemption to the law requiring all federal
agencies be able to pass a financial audit ... and when, if ever, DOD
might be able to pass a financial audit continues to be pushed into
the future). Besides DOD funds in the two wars ... other agencies have
spent huge amounts in the two countries ... including references to
possibly $16B in large pallets of shrink-wrapped $100 bills
disappearing.

The comptroller general in the middle of the last decade would include
in speeches that nobody in congress was capable of middle school
arithmetic (for what they were doing to the budget) and that medicare
part-d comes to a long-term $40T unfunded mandate that totally swamps
all other budget items.

The original justification for going into Iraq last decade included
that it would cost $50B ... recent estimates have the price tag
pushing $5T (including long-term veterans benefits) ... a 100 times
increase.

Peter Flass <Peter_Flass@Yahoo.com> writes:
I've now read several books by American soldiers and marine who served
there. Despite the losses, they uniformly say that what we were doing
was worth while.

accounts were that some of the drop-off in fighting was because of
bribes/tributes paid to not fight ... but it picks back up after the
bribes/tributes stop coming. with the final bill for Iraq looking to
push $5T (100 times the original claim) ... it's not likely that the
American taxpayer is going to continue providing the source of funds
for those bribes/tributes (and lots of corruption and skimming by US
corporations).

from above:
The U.S. strategic goal for Afghanistan is to defeat and prevent the
return of al Qaeda and its affiliates. Since fiscal year 2002,
U.S. costs reported for U.S. military, U.S. diplomatic, and
reconstruction and relief operations in Afghanistan have been over $500
billion. Given U.S. strategic goals and the level of U.S. resources
expected to support Afghanistan in the future, GAO has identified a
number of key issues for the 113th Congress to consider in developing
oversight agendas and determining the way forward in
Afghanistan. Significant oversight will be needed to help ensure
visibility over the cost and progress of these efforts.

Eisenhower's goodbye speech was warning about the MICC; the
pentagon/airforce had tried to use claims about the "bomber gap" to
justify a 20% increase in the DOD budget to build massive numbers of
additional B52s. the important thing about the U2 was that Eisenhower
was able to use cia u2 photo recon to debunk the MICC "bomber gap"
claims.

roll forward and the neocons/team-b are trying to greatly inflate
threat analysis supporting massive MICC spending. Colby, head of the
CIA, wouldn't go along; Ford dismisses him and replaces him with
somebody that would go along (Bush1; claims are that this is a
significant factor in the trend of getting cia analysis to conform
with administration politics). later several of the neocon/team-b
members show up last decade in the Bush2 administration.
https://en.wikipedia.org/wiki/Team_B

from above:
In 1975, PFIAB members asked CIA Director William Colby to approve the
initiative of producing comparative assessments of the Soviet
threat. Colby refused, stating it was hard "to envisage how an ad hoc
independent group of analysts could prepare a more thorough,
comprehensive assessment of Soviet strategic capabilities than could the
intelligence community."[11] Colby was removed from his position in the
Halloween Massacre; Ford has stated that he, himself, made the decision
alone,[12] but the historiography of the "Halloween Massacre" appears to
support the allegations that Rumsfeld had successfully lobbied for
this.[13]

... snip ...

--
virtualization experience starting Jan1968, online at home since Mar1970

from above:
According to the authorities, hackers worked their way into bank
databases and erased withdrawal limits on pre-paid debit cards and made
up their own access codes. Then that data would be loaded onto any old
plastic card with a magnetic stripe, even a hotel key card could work.

... snip ...

possibly substituting the same PIN for every card ... in feb. they got
$40M in 10hrs with 36,000 transactions (avg. $1111/transaction)
... they possibly also had to deal with ATM per-transaction limits.

from above:
The first federal study of ATM fraud was 30 years ago, when the use of
computers in the financial community was growing rapidly. At the time,
the Bureau of Justice Statistics found nationwide ATM bank loss from
fraud ranged from $70 million to $100 million a year.

from above:
Again, Argonne was at the forefront. Building on its experience with
parallel computing, the laboratory acquired an IBM SP -- the first
scalable, parallel system to offer multiple levels of I/O capability
(the speed to read and write data) essential for increasingly complex
scientific applications

from above:
The IBM 3624 was a late 1970s second-generation automatic teller machine
(ATM), a successor to the IBM 3614. Designed at the IBM Los Gatos lab,
the IBM 3624, along with the later IBM 4732 model, was manufactured at
IBM facilities in Charlotte, North Carolina and Havant, England until
all operations were sold to Diebold, tied to the formation of the
InterBold partnership between IBM and Diebold.

... and ...
One of the most lasting features introduced with the 3624 was the IBM
3624 PIN block format used in transmission of an encrypted personal
identification number (PIN). The PIN functions, with an early commercial
encryption using the DES algorithm, were implemented in two modules -
BQKPERS and BQKCIPH - and their export controlled under the US export
munitions rules.

To allow user selectable PINs it is possible to store a PIN offset
value. The offset is found by subtracting natural PIN from the customer
selected PIN using modulo 10.[7] For example, if the natural PIN is
1234, and the user wishes to have a PIN of 2345, the offset is 1111.

The offset can be stored either on the card track data,[8] or in a
database at the card issuer.

To validate the PIN, the issuing bank calculates the natural PIN as in
the above method, then adds the offset and compares this value to the
entered PIN.

... and ...
In 2002 two PhD students at Cambridge University, Piotr Zielinski and
Mike Bond, discovered a security flaw in the PIN generation system of
the IBM 3624, which was duplicated in most later hardware. Known as the
decimalization table attack, the flaw would allow someone who has access
to a bank's computer system to determine the PIN for an ATM card in an
average of 15 guesses.

... snip ...
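
the offset arithmetic above is just digit-wise modulo 10 with no carries
or borrows; a minimal sketch (python, illustrative only):

def pin_offset(natural_pin, customer_pin):
    # offset = customer-selected PIN minus natural PIN, digit by digit, mod 10
    return "".join(str((int(c) - int(n)) % 10)
                   for n, c in zip(natural_pin, customer_pin))

def validate_pin(natural_pin, offset, entered_pin):
    # issuer recomputes the natural PIN, adds the stored offset digit-wise
    # mod 10, and compares against what the customer entered
    derived = "".join(str((int(n) + int(o)) % 10)
                      for n, o in zip(natural_pin, offset))
    return derived == entered_pin

# example from the text: natural PIN 1234, customer wants 2345 -> offset 1111
assert pin_offset("1234", "2345") == "1111"
assert validate_pin("1234", "1111", "2345")

and a rough sketch of where the decimalization table fits (assuming the
conventional 3624-style derivation; the DES encryption of the validation
data under the PIN key is replaced here by a hash stand-in just to
produce hex digits):

import hashlib

DECIMALIZATION_TABLE = "0123456789012345"   # hex digit 0..F -> decimal digit

def natural_pin(validation_data, pin_length=4, table=DECIMALIZATION_TABLE):
    # stand-in for DES-encrypting the validation data under the PIN key
    hex_digits = hashlib.sha256(validation_data.encode()).hexdigest().upper()
    decimalized = "".join(table["0123456789ABCDEF".index(h)] for h in hex_digits)
    return decimalized[:pin_length]

# the decimalization table attack: an insider who can submit manipulated
# tables (e.g. all zeros except one position) to the bank's crypto hardware
# learns which digits occur in the PIN, cutting the search to ~15 guesses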

--
virtualization experience starting Jan1968, online at home since Mar1970

Peter Flass <Peter_Flass@Yahoo.com> writes:
There was a long period before the war where he could have hopped on a
plane, escaped to some country that wouldn't extradite him and enjoyed
at least a large part of the hidden billions he had stashed away. I
think we would have been happy enough to see him go that we wouldn't
have pursued him too hard. He wanted power more than anything, in the
end more than his life. It's sad, too, because he could have saved
Iraq from a whole raft of problems.

note that he was heavily supported by past administrations
... especially during the iraq/iran war ... and he apparently thought
that he could go into kuwait with impunity.

one account has sat. photo recon showing iraq marshaling forces for the
kuwait invasion, the white house was notified ... and the administration
responded by saying that iraq wouldn't do any such thing and the analyst
raising the alarm was discredited. it wasn't until the same analyst
raised the alarm that forces were being marshaled on the saudi border
that things got moving.

lots of the details of the US administration support for Iraq during the
80s were scheduled to be released in 2001 (under Presidential Records
Act) ... then the new president signs an executive order keeping them
classified
http://www.garlic.com/~lynn/2013f.html#53 What Makes an Architecture Bizarre?

Anne & Lynn Wheeler <lynn@garlic.com> writes:
one account has sat. photo recon showing iraq marshaling forces for the
kuwait invasion, the white house was notified ... and the administration
responded by saying that iraq wouldn't do any such thing and the analyst
raising the alarm was discredited. it wasn't until the same analyst
raised the alarm that forces were being marshaled on the saudi border
that things got moving.

note that Boyd's Desert Storm plan had Iraqi army captured and disarmed
... but right at the end, they stopped and the Republican Guard was
allowed to escape.

since then there have been various comments attributing the escape of
the Republican Guard to the administration bowing to saudi pressure.

since desert storm was more a preemptive action to prevent an invasion
of saudi arabia ... that seems unlikely. it is much more likely that
there are lots of interests in both saudi arabia and iraq ... and desert
storm was more to keep them from battling each other.

This Is What Winning Looks Like -- My Afghanistan War Diary
http://www.vice.com/vice-news/this-is-what-winning-looks-like-full-length

from above:
The US and British forces are preparing to leave Afghanistan for good
(officially, by the end of 2014), and my time in the country over the
last six years has convinced me that our legacy will be the exact
opposite of what Allen posits -- not a stable Afghanistan, but one at
war with itself yet again. Here are a few encapsulated snapshots of what
I've seen and what we're leaving behind.

from above:
The Marines had planned to slip into Fallujah "as soft as fog." But
after four American contractors were brutally murdered, President Bush
ordered an attack on the city--against the advice of the Marines. The
assault sparked a political firestorm, and the Marines were forced to
withdraw amid controversy and confusion--only to be ordered a second
time to take a city that had become an inferno of hate and the lair of
the archterrorist al-Zarqawi.

... snip ...

a little (Fallujah) inter-service rivalry ... there were some Army
references to fire fights that the Marines didn't stick around for.

What Makes a bridge Bizarre?

hancock4 writes:
But an awful lot of general fund resources have been used to pay for
the construction, maintenance, and operation of major highways.

Don't forget highways require considerable public safety costs to deal
with accidents, and those costs are almost always paid for general
taxes, not road taxes.

Also, highway bonds are usually government backed and have low
interest rates as opposed to private sector bonds.

When a road is expanded, the land used often comes off the tax base,
forcing a tax hike. The benefits do not necessarily accrue to those
paying the taxes, indeed, some may suffer losses from increased
highway congestion.

there was a case in cal. where the public utility commission (PUC)
authorized a rate increase for PG&E .... part of it was to keep
brush&trees cleared from power lines ... instead PG&E was diverting the
funds to pay executive bonuses and not doing any brush and limb
clearing. there was a brush fire that burned several bldgs. ... started
from a powerline electric spark in uncleared brush ... and PG&E was held
liable (since they had taken responsibility for the brush clearing with
the petition for the additional rate increase for that purpose).

except for get-out-of-jail-free cards for public officials ... any
traffic accidents & deaths as a result of diverting highway trust funds
and the resulting lack of road maintenance should subject the
responsible parties to prosecution.

there have been cases the past couple years where some roads&highways
have been privatized ... i wonder how they are treating legal liability
for proper maintenance.

when they were building the new 101 highway in the south bay ... the
coyote valley citizens group convinced the state that it should only be
four lanes for the ten miles through coyote valley; it was six lanes
north of coyote valley in south san jose and six lanes south of coyote
valley. The six->four transition going north in the morning was a
horrendous choke point, as was the four lane stretch all the way through
coyote valley ... and it reversed in the evening going south. The coyote
valley citizens committee had claimed that the six->four lanes would
reduce the congestion through coyote valley ... but it had the exact
opposite effect ... making congestion significantly worse. There was the
equivalent of tens of thousands of dollars aggregate loss from the
additional commute time every day, and additional tens of millions spent
by the state when it finally got around to retrofitting the coyote
valley section to six lanes (compared to what it would have cost to make
it six lanes to begin with). is anybody held responsible????

from above:
In 1975, PFIAB members asked CIA Director William Colby to approve the
initiative of producing comparative assessments of the Soviet
threat. Colby refused, stating it was hard "to envisage how an ad hoc
independent group of analysts could prepare a more thorough,
comprehensive assessment of Soviet strategic capabilities than could the
intelligence community."[11] Colby was removed from his position in the
Halloween Massacre; Ford has stated that he, himself, made the decision
alone,[12] but the historiography of the "Halloween Massacre" appears to
support the allegations that Rumsfeld had successfully lobbied for
this.[13]

reference to 2001 executive order preventing release of presidential
papers called for under the Presidential Records Act ... and reference
to US support of Iraq ... including video of Rumsfeld with Saddam

above includes:
Professor Noam Chomsky says the only country to have been granted the
"privilege" of attacking a U.S. warship and getting away with it, other
than Israel in 1967, is Saddam Hussein's Iraq.[43]

... snip ...

also references:
http://www.gwu.edu/~nsarchiv/NSAEBB/NSAEBB82/index.htm

Peter Flass <Peter_Flass@Yahoo.com> writes:
Sounds like Benghazi. I don't know why we bother having field agents
when obviously the fat-assed bureaucrats in DC have a much better idea
of what's "really" going on. They could just pull out their crystal
balls and generate the intelligence.

since then there are increasing references to benghazi having been a
cia station ... as well as the militarizing of the CIA ... decreasing
focus on intelligence gathering and a major spike in the numbers of
private army black operators

one could attribute the lack of awareness in lots of washington (on
what was going on in benghazi) to lots of secrecy, obfuscation and
misdirection ... all part of it being a cia station ... and the lack of
sufficient security part of trying to maintain that cover. the level of
secrecy, obfuscation and misdirection then contributes to confusion in
other agencies and most of washington. Subsequent inconsistencies then
look like the cia trying to maintain some cover story ... as one
explanation after another is exposed as being inconsistent with the
facts.

one could possibly trace the decline in the CIA back to replacing Colby
with Bush1 ... increasingly forcing CIA intelligence to be in line with
administration policies ... which would increasingly devalue real
professional intelligence skills and promote party line conformity
http://www.garlic.com/~lynn/2013g.html#54 What Makes collecting sales taxes Bizarre?

from above:
CIA is incompetent at clandestine and covert operations. This has been
known since at least 1992, but has only become more and more evident
since 9/11. CIA is not only addicted to official cover as a simple
bureaucratic solution totally at odds with its mission, it is
demonstrably incapable of learning how to do non-official cover.

... snip ...

It's not clear that various factions aren't out to make political
points by spinning Benghazi publicly ... when they knew that
explanations would be inconsistent ... because hardly anybody was being
told the real story ... but afterwards everybody was told to stay away
from any implication that it was a CIA station.

the spectre of devaluing professional skills and firing people when they
don't conform to some party line ... harkens back to Boyd's To Be or To
Do:

"There are two career paths in front of you, and you have to choose
which path you will follow. One path leads to promotions, titles, and
positions of distinction.... The other path leads to doing things that
are truly significant for the Air Force, but the rewards will quite
often be a kick in the stomach because you may have to cross swords with
the party line on occasion. You can't go down both paths, you have to
choose. Do you want to be a man of distinction or do you want to do
things that really influence the shape of the Air Force? To be or to do,
that is the question." Colonel John R. Boyd, USAF 1927-1997

From the dedication of Boyd Hall, United States Air Force Weapons
School, Nellis Air Force Base, Nevada. 17 September 1999

A Fascinating History of JES2

from above:
From the perspective of long time SHARE volunteer and JES expert Jack
Schudel, I pass along this interesting and entertaining article from
Jack about the history of JES2/3; the names, the color orange, songs
and buttons at SCIDS, and of course, the pioneers.

disclaimer: my wife served a stint in the early JES group ... she was
catcher for ASP->JES3 and was co-author of JESUS design ... JES
Unified System ... the best of JES2&JES3 that neither customer set
could live w/o (for all sorts of reasons, JESUS never happened). she
then went on to serve a stint in POK in charge of loosely-coupled
architecture.

note that NJE/NJI ... used left-over entries in the 255-entry pseudo
unit record table to define network nodes ... typically around 160
... however the internal network had relatively early exceeded 200
nodes ... restricting JES2 to purely boundary nodes (JES2 would trash
traffic where the origin and/or the destination nodes didn't appear in
its table).

fortunately the internal network was based on rscs/vnet which also had
a very clean layered architecture ... as a result it was relatively
trivial to implement a wide variety of drivers ... including ones that
simulated NJE. This was used to great advantage because the JES2
implementation had jumbled together job control fields and network
layer fields ... and JES2 systems at different release levels ... that
talked directly to each other ... would frequently take down/crash the
whole MVS system. It then became the responsibility of the internal
VNET/RSCS systems to have a whole library of JES2 drivers ... and to
start the JES2 driver ... for talking directly to a JES2 system ... that
converted all JES2 headers to the format exactly required by that
specific JES2 release (as countermeasure to JES2 systems at different
release levels constantly crashing each other). There is the infamous
case of Hursley MVS systems crashing because of files arriving from a
San Jose MVS JES2 system ... and the local Hursley VNET/RSCS system
being blamed ... because it wasn't correctly converting the header
fields (the conversion being the countermeasure to JES2 systems at
different release levels crashing each other).
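
a toy sketch of the per-release driver idea (field names and release
identifiers are made up; real NJE headers are binary record structures,
not python dicts):

# one conversion routine per JES2 release level the VNET/RSCS driver
# might talk to; each rewrites the header into what that release expects
def to_release_a(hdr):
    out = dict(hdr)
    out.pop("newer_accounting_field", None)       # field release A doesn't know
    return out

def to_release_b(hdr):
    out = dict(hdr)
    out.setdefault("newer_accounting_field", "")  # field release B requires
    return out

CONVERTERS = {"jes2-release-a": to_release_a, "jes2-release-b": to_release_b}

def forward(hdr, peer_release):
    # VNET/RSCS starts the driver matching the peer's release so a
    # mismatched header never reaches (and crashes) the peer's MVS system
    return CONVERTERS[peer_release](hdr)

print(forward({"origin": "SANJOSE", "dest": "HURSLEY"}, "jes2-release-b"))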

Prior to December 1976 ... VNET/RSCS wasn't going to be announced as a
product ... because it was in the wake of the Future System failure and
POK was in the process of convincing corporate to kill the vm370
product and transfer all the people to POK to work on MVS/XA. JES2 NJE
wasn't going to be announced because it had to be charged for ... and
the requirement was that customer charges times the forecast had to
cover the costs ... and there was no customer forecast times any price
that covered the JES2 NJE costs (the company was still adapting to the
23Jun69 unbundling announcement where non-kernel software was being
charged for). A deal was then cut to make a joint JES2/NJE plus
RSCS/VNET product announcement. RSCS/VNET total product costs were so
small and the forecast was so large ... that it would have been possible
to ship at $30/month and still meet all product pricing requirements. As
a result, a joint product announcement at $600/month ... resulted in
joint revenue that covered joint costs (in effect RSCS/VNET revenue was
subsidizing the JES2/NJE product announcement).

trivia: the internal network was larger than the arpanet/internet from
just about the beginning until late 85 or early 86. by the time
JES2/NJE got around to supporting 999 nodes, the internal network was
over 1000 nodes and by the time JES2/NJE extended to 1999 node
support, the internal network was over 2000 nodes.

What Makes collecting sales taxes Bizarre?

mausg writes:
Not mentioned much, the US retains one of the largest military bases
in the world in Bagdad. The present Iraqi governemnt is Shia based,
resented in the Sunni areas, (generlly the West and North). The oil
industry is in the hands of multinationals, basically, the US with
token outsiders. (One of the suspeccted reasons for the war was that
Saddam was selling oil without using dollars).

According to some of Cheneys recent remarks, Dick was basically in
charge, with GWB nodding agreement as Dick made plans)>(analogy with
Ludendorff-Hindenburg in WW1). The elections were basically dodgy,
former Baath(Saddam) members were not allowed to go fforward.

from above:
The momentum towards sectarian war in Iraq might have been stopped by
political reforms that provided for more equitable power sharing with
the Sunni political parties. The al Maliki government, instead, treated
Sunni political protestors as terrorists and Baathists.

Now the time for compromise appears to have passed. One ripple effect of
the fighting in Syria is that Sunni groups in Iraq have become
emboldened to fight the Shiite-dominated government in Baghdad.

... snip ...

--
virtualization experience starting Jan1968, online at home since Mar1970

jmfbahciv <See.above@aol.com> writes:
Were you running the OS you were developing when you were moving
files? We did.

In our shop we didn't usually move a file. It was usually lots of files,
like a whole OS' worth. It was easier and faster to move a disk
pack which you have verified has all the files you need.

at CSC, I added to the process of building the production cp67 system
backup tape ... normally the 80x80 card images of the TXT (output of
assembler) with the BPS stand-alone loader were written to tape. This
image could be booted and the first thing it did was write a (bootable)
core image to disk.

Running this in a virtual machine, there was more time and access to all
the files ... so I just added to the process, appending all the source
(& utilities) used to create the TXT files for the bootable image. A new
system build was then also its own backup tape.

ongoing attempts at debunking a budget item that is projected to
eventually reach a trillion dollars, and analysis that it will never
meet any of its stated requirements
http://elpdefensenews.blogspot.com/2013/05/quick-leap-quickstep-quick-sand.html
http://elpdefensenews.blogspot.com/2013/05/little-of-what-navy-says-about-f-35-can.html
http://elpdefensenews.blogspot.com/2013/05/operationbovine-f-35-pr-tour-at-fort.html

even Stockman weighs in on MICC budget items

The Great Deformation: The Corruption of Capitalism in America
http://www.amazon.com/Great-Deformation-Corruption-Capitalism-ebook/dp/B00B3M3UK6
pg693/14862-64:
As indicated earlier, the markers of irrational perpetuation of
senseless military spending are everywhere: the DOD budget continues to
modernize M1 battle tanks each year when there is no real need for most
of the 9,000 ultra-lethal tracked machines we already have.

DEC and the Bell System?

Peter Flass <Peter_Flass@Yahoo.com> writes:
The PDP-11 was introduced in 1970. The IBM Series/1 didn't come out
until 1976 (according to Wikipedia). The Series/1 was a good box, but
obviously too late for AT&T. Besides, although I haven't compared
them in detail it's possible that IBM hobbled the Series/1 so as not
to cut in to their mainframe business, or at least there might have
been a perception of such.

there is a joke that the officially released operating system for the
series/1 (RPS) was the result of some number of old kingston os/360 MFT
developers retiring to Boca and attempting to recreate MFT ... a
heavyweight and very slow implementation

HSDT (high-speed data transport) involved T1 and faster speed links.
None of the 37x5 boxes supported anything faster than 56kbit ... the
last mainframe controller to even support T1 was the 2701 ... and
customers were starting to have problems keeping the (in some cases
20+yr old) boxes in service

however, there was a special, custom series/1 "Zirpel" card done by FSD
for gov. contracts. one of the strings attached to some of the HSDT
funding ... was that I also be able to demo series/1 w/zirpel cards
running at T1 ... which meant that I had to get some series/1 boxes.

it turns out that a former co-worker at IBM was then at ROLM running
dataprocessing operations (and responsible for all the equipment orders)
... and I had to do a little horse-trading ... to get a couple of
series/1 boxes out of their allocation.

DEC and the Bell System?

John Levine <johnl@iecc.com> writes:
You couldn't easily use the 1130 for realtime work. Its big brother
the IBM 1800, which was program compatible and much faster and had
better I/O was a fine realtime system but was very expensive.

One of the things that DEC did right was to make it easy to attach
random stuff to their computers. The I/O buses were simple, and you
could build interfaces from the same kinds of modules used to build
the computers.

Dan Espen <despen@verizon.net> writes:
So a brutal Sunni (Saddam) took control, and killed, tortured, did
whatever he wanted. But there was a general peace in the country
except for a couple of wars Saddam started and a couple of uprisings.

from above:
On June 9, 1992, Ted Koppel reported on ABC's Nightline, "It is becoming
increasingly clear that George Bush, operating largely behind the scenes
throughout the 1980s, initiated and supported much of the financing,
intelligence, and military help that built Saddam's Iraq into the power
it became",[5] and "Reagan/Bush administrations permitted -- and
frequently encouraged -- the flow of money, agricultural credits,
dual-use technology, chemicals, and weapons to Iraq."[6]

... snip ...

a lot of information on much of the above was to be released in 2001
under the "Presidential Records Act" until bush2 signed an "Executive
Order" keeping it classifed ...
http://www.garlic.com/~lynn/2013f.html#53 What Makes an Architecture Bizarre?

from above:
It is also an open secret that the Rios Montt's regime was part of a
string of puppet dictatorships, military and civilian, that carried out
policies dictated by the US government, in this case, the administration
of Ronald Reagan. With the overthrow of the Somoza dictatorship and the
installation of a Sandinista government in neighboring Nicaragua, the
bloodbaths multiplied.
... snip ...

from above:
In 1975, PFIAB members asked CIA Director William Colby to approve the
initiative of producing comparative assessments of the Soviet
threat. Colby refused, stating it was hard "to envisage how an ad hoc
independent group of analysts could prepare a more thorough,
comprehensive assessment of Soviet strategic capabilities than could the
intelligence community."[11] Colby was removed from his position in the
Halloween Massacre; Ford has stated that he, himself, made the decision
alone,[12] but the historiography of the "Halloween Massacre" appears to
support the allegations that Rumsfeld had successfully lobbied for
this.[13]

from above (including Boyd and many of his acolytes)
Some people, such as the following authors, have inferred, insinuated,
or suggested that entering a state of perpetual war becomes
progressively easier in a modern democratic republic, such as the United
States, due to the development of a relationship network between people
who wield political and economic power also owning capital in companies
that financially profit from war, lobby for war, and influence public
opinion of war through influence of Mass media outlets that control the
presentation for the causes of war, the effects of war, and the
Censorship of war: (1) "The Iron Triangle: Inside the Secret World of
the Carlyle Group" (2004) by Dan Briody; (2) "The Pentagon Labyrinth:
10 Short Essays to Help You Through It" (2011) an anthology by nine
authors who are Pierre M. Sprey, George Wilson,
Franklin C. Spinney, Bruce I. Gudmundsson, Col. G. I. Wilson, Col. Chet
Richards, Andrew Cockburn, Thomas Christie, and Winslow T. Wheeler; (3)
"Prophets of War: Lockheed Martin and the Making of the
Military-Industrial Complex" (2010), by William D. Hartung; ...
... snip ...

What Makes code storage management so cool?

jmfbahciv <See.above@aol.com> writes:
We didn't do that. The collecting of files for each tape was the job
of an entire group. they serviced all PDP-10 groups and were the
go-between between us and SDC. The group started with a software
engineer running it. Over the years, it devolved into mostly wage
class 2 thinking types. I finally proved that an engineer had to
be involved in the planning of distribution. Release Engineering could
do the work but the engineer was needed to decide what bits and how
they should flow into Release Engineering and onto the distribution
media. There finally was a requirement for a packaging plan for
each major distribution.

at the time, I was the only one involved in producing production cp67
systems at the science center ... in addition to "release engineering"
for internal datacenters ... as well as validation & regression
testing. I've mentioned before that as various internal datacenters were
moving to vm370, the internal cp67 datacenters started to drop off. it
was also during this period that lots of 370 related work was suspended
in favor of future system activity (I continued working on 360/370
during the period, even periodically ridiculing future system)
... misc. past posts
http://www.garlic.com/~lynn/submain.html#futuresys

with csc on the 4th flr and multics on the 5th flr ... we would
periodically kid each other. I've mentioned that it would be fair to
compare the total number of multics systems installed with the total
number of vm370 customer installs or even the total number of vm370
internal datacenters. However, I've noted that the total number of
multics systems that ever existed in the history of the world ... was
almost as large as the number of internal vm370 datacenters running my
csc/vm.

after the failure of FS there was a mad rush to get stuff back into the
370 product pipelines ... which contributed to the official development
group picking up various pieces of csc/vm for product distribution. it
also contributed to the decision to release other parts of my csc/vm as
the "resource manager" (a separate kernel component and the guinea pig
for starting to price kernel software) ... some past posts
http://www.garlic.com/~lynn/subtopic.html#fairshare

CSC was classified as part of the field ... and there was a program for
software products developed at field locations where the developer(s)
got the first month's rental of each sale (and some CSC members had
gotten revenue from shipping various software applications). A week or
two before the release of the Resource Manager, CSC was reclassified as
a hdqtrs location and members were no longer eligible for the program.
The resource manager was priced at $895/month and almost immediately
went to 1000 customers ($895,000/month) ... I even offered to forego my
regular salary to be allowed to participate.

Standard vm370 product had monthly customer distribution release tape
called "PLC" ... which had all integrated changes&fixes released that
month (both executables and full source updates). They tried to get me
to do a concurrent release of the resource manager ... integrated with
the standard product "PLC". I convinced them that since I was the only
person involved in the Resource Manager ... including doing extensive
performance regression tests for every release (something that even the
official development group didn't do) ... as well as having a lot of
other responsibilities ... I couldn't turn out a Resource Manager PLC
more than once every three months. Final regression before initial
release of Resource Manager involved over 2000 automated tests that took
over three months elapsed time to run. For incremental PLCs ... I would
do a set of 100-200 tests. Misc. past posts mentioning automated
benchmarks
http://www.garlic.com/~lynn/submain.html#benchmark

--
virtualization experience starting Jan1968, online at home since Mar1970

The Vindication of Barb

scott@slp53.sl.home (Scott Lurndal) writes:
As I understand it, fault tolerance was a huge design criteria in MVS and
successors. Pre-condition/post-condition checking in most OS routines,
a large amount of research into software fault tolerance techniques, et. alia.

After moving to SJR ... they let me wander around other locations in
the San Jose area. One of the places was disk engineering development
and product test ... bldgs. 14&15, across the street from SJR (bldg 28)
... recent sat. photos show bldgs 14&15 still standing but bldg 28 is
plowed under ... recent post with image URL
http://www.garlic.com/~lynn/2013g.html#27 Old data storage or data base

at the time, they had several mainframes being scheduled 7x24, around
the clock, for stand-alone development testing. at one point they had
tried running MVS for concurrent testing ... but found MVS had a 15min
MTBF in that environment (requiring manual re-ipl/reboot). I offered to
rewrite the I/O supervisor to be bullet-proof and never fail ... which
greatly improved productivity since they could then have on-demand,
anytime, concurrent testing. I then did an internal document describing
all the work ... and made the mistake of mentioning the MVS 15min MTBF
... which brought down the wrath of the MVS group on my head.

(I've characterized before that) the MVS group would have gotten me fired
if they could have figured out how ... but they did try and make my life
unpleasant ... including squashing a corporate award for the engineering
work

Dave Garland <dave.garland@wizinfo.com> writes:
I dunno about the USSR. That was after Lenin died. Trotsky did get
killed eventually, though Trotsky's defeat by Stalin was gradual over
a period of years.

for other drift, "The Wars for Asia, 1911-1949" mentions that during
ww2, 3/4s of german military resources went against USSR and 2/3rds of
japanese military resources were on mainland china ... significantly
reducing what the rest of the allies had to face.
http://www.garlic.com/~lynn/2013e.html#10 The Knowledge Economy Two Classes of Workers

--
virtualization experience starting Jan1968, online at home since Mar1970

What Makes collecting sales taxes Bizarre?

Peter Flass <Peter_Flass@Yahoo.com> writes:
Sounds like the Scots and the Irish. Glorious defeats.

BBC had a program on genocide by the English ... in the case of the
Scots ... those they didn't kill off were dispossessed ... leaving
emigration about the only option. it juxtaposed tales about all the
Scots volunteering for military service during WW1 ... with military
service being nearly the only option left for those that didn't
emigrate.

there are also references that after ww1, it was the English that
developed the technique of airplanes strafing civilian populations in
Iraq ... several online references to this 2003 WSJ article (search the
web for quotes if you don't have access)
http://online.wsj.com/article_email/0,,SB104802384328857700,00.html

To suppress later rebellions by Iraq's Kurds, the British invented the
technique of strafing civilian rebels from the air. As for Gen. Maude,
he succumbed to cholera eight months after he reached Baghdad.
... snip ...

it (the resource manager) was being shipped out of the science center
... 4th flr of 545 tech sq ... which had fewer than 40 people total.

also, as part of being the guinea pig for starting to charge for kernel
software ... I had to spend a lot of time with legal & business people
on kernel software charging policies.

note that standard organization testing was purely oriented to failure
testing ... a lot of my benchmarking was verifying performance related
characteristics. standard product organizations rarely did any detailed
performance validation ... they were going to do the minimum necessary
to get it out the door.

official development groups could have hundreds of people doing such
things ... but not on the 4th flr. the mechanics of duplicating material
and documentation, keeping track of which customers had bought what
... and who needed to be shipped what ... were handled by PID ... all I
had to do was ship a single copy to PID with the appropriate identifier
... and they handled all the rest.

however, for the first year ... I was also 2nd & 3rd (etc) level
support. customers reporting any problem would contact first level
support ... but nearly everything on the resource manager would then be
referred to me.

--
virtualization experience starting Jan1968, online at home since Mar1970

One issue would be if they knew beforehand that the legislation would
result in loopholes ... either because they helped design the
loopholes ... or they withheld knowledge that the legislation would
result in loopholes (aka collusion). This is significantly different
than uncovering the loopholes long after the fact.

In the US there is analysis of the benefit of a flat-tax ... not
because of any direct benefits of the flat-tax itself ... but because it
would result in reducing the 72,000 page taxcode to 400 pages ... the
claim is the current complexity of the taxcode costs 6% of GDP ... which
would be recovered by going to a simpler taxcode. However, the
predictions are that this is unlikely to happen since by far the largest
piles of money spread around capitol hill tend to be related to tax
matters and tax loopholes (contributing significantly to claims that
congress is the most corrupt institution on earth).

"A desire to protect the oppressed"???? the more common scenario is
that there is as much as $30T hidden offshore ... in large part of
from extremely corrupt officials and executives plundering their
countries.

question is how much of the attention on large internet companies is
misdirection from other more egregious activity

David Cameron warns overseas territories on tax; Prime Minister David
Cameron has urged British overseas territories to "get their house in
order" and sign up to international treaties on tax.
http://www.bbc.co.uk/news/business-22592662

spends quite a bit of time on how reform of the monetary system can
fund gov. operations w/o requiring income taxes.

that still leaves all the rest of fraud & manipulation that goes on by
the too-big-to-fail ... like libor and nearly free license to money
launder for terrorists and drug cartels

This goes along with the jokes about SEC fines: even when they are
hundreds of millions (and repeated consent decrees that are violated
time after time) ... they are such a small percentage of the actual
amounts involved ... they come to be viewed as a small cost of doing
business .... steal enough money and you never have to go to jail.

Supposedly Sarbanes-Oxley was passed based on the claim that it would
prevent future ENRONs and WORLDCOMs ... and that both executives and
auditors would do jail time for invalid public company financial
filings. Possibly because even GAO didn't believe SEC was doing
anything ... it started doing reports of fraudulent public company
financial filings ... even showing an increase after Sarbanes-Oxley (and
nobody doing jail time). There was a joke seen on the internet: "ENRON
was a dry run and it worked so well that it has become
institutionalized".

Tax evasion costs EU states 1tn euros ($1.3tn; £0.85tn) a year, more
than was spent on healthcare in 2008.

... snip ...

above mentions ongoing senate hearings ... however a big part of the
claim that congress is the most corrupt institution on earth involves
its sale of tax loopholes ... with another claim that paying congress
for tax loopholes has by far the largest ROI of all possible business
investments. In the past couple years it would appear that congress has
been paying attention to these articles, along with seeing the enormous
amount of money for businesses as a result of the loopholes ... and that
congress is trying to come up with a way of changing loopholes from a
one time payment to a recurring payment every year ... if possible even
a percentage of the resulting windfall to businesses.

some of the commentary going on during the hearings was implying that
the special interests paying for the loopholes (and in numerous cases
actually drafting the wording for the loopholes) were totally unrelated
to the special interests using the loopholes ... which seems in direct
contradiction to the articles that buying loopholes carries the highest
ROI of any business investment.

a few years ago (I think) CBS(?) was covering an annual economic
conference. part of the broadcast was a group of economists sitting
around discussing the benefits of flat-tax. they had two major points
... unrelated to any direct flat-tax benefits. one was that it would
eliminate the selling of tax loop-holes, the major contributor to
congress being considered the most corrupt institution on earth. another
was that tax loop-holes are the major contributor to the tax code being
enormously complex, 65,000 pages (at the time) ... with dealing with
the complexity costing a significant part of GDP; flat-tax would cut the
code back to 400-500 pages, the simplification would gain a direct 3% in
GDP and another 3% gain in GDP with decisions being based on business
rather than tax-code. semi-humorously, they mentioned that the major
entity lobbying against the elimination of tax loop-holes was Ireland.

--
virtualization experience starting Jan1968, online at home since Mar1970


What Makes collecting sales taxes Bizarre?

jmfbahciv <See.above@aol.com> writes:
The Iraqi war morphed into a civil war. Extremists are also taking
over in the other countries which ousted their regime.

more like it was always there ... US press just spent more time
reporting on the Iraqi war ... also the public has very short memories
... doesn't even remember the Iraq/Iran war ... with US providing major
support to Iraq ... and then the Iran/Contra affair ... basically
selling weapons to both sides (and supporting genocidal regimes in the
western hemisphere).

What Makes travel Bizarre?

jmfbahciv <See.above@aol.com> writes:
I thought so, too. but there were news items about how companies
found it to be cheaper than long-haul trucking. I don't hear as
much about long-hauling as I used to when I was growing up.

The (IBM) Dallas E&S center came out with a publication comparing T/R
and Ethernet .... conjecture is they had used early-70s 3mbit Ethernet
(before "listen before transmit") compared against mid-80s 16mbit t/r
(nearly a 15yr time dislocation). At the time of the E&S publication
... the new Almaden bldg, which had been heavily wired with CAT5 for
T/R ... saw higher aggregate LAN throughput and lower latency with
10mbit Ethernet (on the CAT5) than 16mbit T/R (aka effective 16mbit T/R
aggregate throughput was less than half the media throughput, in part
because of the token passing latency)
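
a back-of-the-envelope model of that effect (numbers purely
illustrative, not measurements): on a token-passing LAN every frame also
pays the token rotation/station latency, so aggregate throughput falls
below the media rate as that latency grows relative to the frame
transmit time:

def effective_mbit(media_mbit, frame_bytes, per_frame_overhead_us):
    # time to put one frame on the wire, in microseconds
    frame_us = frame_bytes * 8 / media_mbit
    # fraction of the time the media carries data rather than waiting
    return media_mbit * frame_us / (frame_us + per_frame_overhead_us)

# 16mbit T/R with an assumed 1500us of token/station latency per frame
print(effective_mbit(16.0, 2000, 1500))   # ~6.4mbit ... under half of 16
# 10mbit ethernet with an assumed 100us per-frame overhead
print(effective_mbit(10.0, 1500, 100))    # ~9.2mbit ... close to media rate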

Disclaimer: in late 70s, my wife was co-inventor on (IBM) token-passing
patent.

IBM RS/6000 had a similar but different problem. For its precursor,
PC/RT, the workstation division had done its own 4mbit T/R card (for
PC/RT AT-bus). Corporate hdqtrs directed the workstation division that
it had to use PC microchannel cards for the RS/6000 (microchannel bus,
and couldn't do their own cards). The PC microchannel 16mbit T/R card
design point was 300+ stations all sharing the same 16mbit bandwidth
doing terminal emulation. As a result, the 16mbit T/R microchannel
"per-card" throughput was less than the PC/RT 4mbit T/R card (this is
separate issue from aggregate 16mbit T/R LAN being less than 10mbit
Ethernet) ... making a PC/RT server with 4mbit T/R card faster than an
RS/6000 server with a 16mbit T/R card (a non-IBM vendor 10mbit Ethernet
microchannel card would be significantly faster than both the PC/RT
4mbit T/R card as well as the PC 16mbit T/R card)

ISAM was 60s ... heavily used CKD architecture ... a 60s trade-off of
I/O resources against scarce processor capacity and processor memory
... the index/data structures could all be left out on disk and searched
by convoluted channel programs ... which could even modify the channel
program on the fly during execution.

This was a big problem for virtual machine CP67 ... since channel
programs required real addresses ... when the virtual machine did SIO
... CP67 had to simulate it by decoding the virtual machine channel
program and building a copy in real storage, substituting real
addresses for the virtual addresses. Some of the ISAM convoluted channel
programs gave CP67 fits.

The move of the other os/360 operating systems to 370 virtual memory
faced the same problems. OS/360 access methods executed in application
space, building the necessary channel programs and then doing EXCP
(svc0) to do the SIO for channel program execution. The initial move of
MVT to virtual memory was OS/VS2 Release 1 (SVS ... single virtual
storage). It was initially done on a 360/67 with some code cribbed into
the side of MVT to build a single 16mbyte virtual address space and a
little bit of code to handle page interrupts and do page I/O. The
majority of the MVT code changes was cribbing CP67's CCWTRANS into EXCP
processing for the building of channel program copies.
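
a much-simplified sketch of what that CCW translation has to do (real
CCWs are 8-byte doublewords with command/data chaining, TICs, and page
pinning to worry about; this toy only shows the virtual-to-real address
substitution):

from dataclasses import dataclass

PAGE = 4096

@dataclass
class CCW:
    command: int     # channel command code (read, write, seek, ...)
    data_addr: int   # buffer address
    count: int       # byte count

def ccwtrans(channel_program, page_table):
    # copy the guest's channel program, replacing each virtual buffer
    # address with the real address of the resident (pinned) page backing
    # it; buffers crossing a page boundary would need splitting, which
    # this toy skips
    real_program = []
    for ccw in channel_program:
        vpage, offset = divmod(ccw.data_addr, PAGE)
        real_page = page_table[vpage]
        real_program.append(CCW(ccw.command, real_page * PAGE + offset, ccw.count))
    return real_program

# guest virtual page 5 happens to be backed by real page 42
print(ccwtrans([CCW(0x02, 5 * PAGE + 128, 80)], {5: 42}))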

By the mid-70s, the IO channel - processor resources trade-off was
starting to invert and convoluted channel programs (and CKD) were
becoming obsolete ... performance was much better with
straight-forward channel programs built from data structures cached in
processor memory. This is in addition to the heavy penalty that
convoluted channel programs represented for EXCP processing creating
copies of channel programs with real addresses for actual execution.

The CKD convoluted channel program architecture also placed a heavy
performance penalty on channel processing ... since the 60s CKD
architecture allowed a CCW to modify the immediate following CCW
... channel processing specification required purely serial,
one-at-a-time CCW execution ... the following CCW couldn't even be
fetched until the current CCW had finished processing. This resulted
in lots of serialized latency delays between channel I/O processing
having to wait until the previous CCW had finished execution before
the next CCW could be fetched from processor memory.

Upthread I mention doing channel extender support in 1980, allowing the
IMS group being moved offsite to have local channel-attached 3270
controllers (they found the standard product remote 3270 intolerable).
This violated the basic underlying channel program architecture
... since the complete channel program was preloaded to the remote
location. It wouldn't work with any convoluted channel programs where a
CCW would modify a following CCW (in the same channel program) ... since
the change would show up in processor memory but wouldn't show up in the
copy that had been initially downloaded.

Having to support strict/serialized channel processing architecture
for convoluted channel programs was one of the reasons that ESCON was
already obsolete by the time it was released in the early 90s with
ES/9000

Part of the upthread mention of being asked in 1988 to help LLNL
standardize what becomes FCS ... was allowing a complete I/O request to
be downloaded to the remote end concurrent with the initial I/O request
start. That eliminated lots of latency from back&forth processing for an
I/O request ... sending an I/O request initiation included everything
needed to perform the I/O request at the remote end ... the dual simplex
1gbit/sec transfers then run asynchronously at full media transfer speed
... eliminating all of the end-to-end mainframe channel program protocol
chatter latency. This was in place at the time 17mbyte/sec ESCON
ships. Later FICON layers the channel program chatter latency penalty on
top of native FCS ... significantly cutting throughput (compared to
native FCS).
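
a rough latency model of the difference (numbers purely illustrative):
with serialized per-CCW handshaking each CCW pays an end-to-end round
trip, while the FCS-style approach pays roughly one round trip per I/O
request:

def serialized_io_us(n_ccws, round_trip_us, data_xfer_us):
    # each CCW fetch/execute waits for the previous one end-to-end
    return n_ccws * round_trip_us + data_xfer_us

def one_shot_io_us(round_trip_us, data_xfer_us):
    # complete request ships with the initiation; data flows asynchronously
    return round_trip_us + data_xfer_us

RT, XFER = 200.0, 500.0   # assumed round trip and data transfer times (us)
print(serialized_io_us(6, RT, XFER))   # 1700us for a 6-CCW channel program
print(one_shot_io_us(RT, XFER))        # 700us when it all goes in one burst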

from above:
Rosner focuses on Articles I and II of Dodd Frank and describes how
their plans to deal with resolving large firms has only made matters
worse. It's key to understand that these two sections are somewhat at
odds with each other. Dodd Frank peculiarly provides for two ways to
wind up systemically important firms. Title I says they should prepare
for bankruptcy. They need to clean up how they are organized and make
sure activities fit or can be mapped into legal entities and prepare
living wills, which are plans for how they would wind themselves
up. But confusingly, banks can also be "resolved" which is more like
"rescued with a little pain inflicted on investors" under Title
II. Title II provides for a second way to deal with stressed financial
firms, which includes having the government provide what amounts to
debtor-in-possession financing while the bank is restructured. This,
sports fans, is what is otherwise known as a bailout.

Are Treasury and the Fed at Odds Over Big Banks? Treasury Secretary
Lew keeps hands off as Wall Street giants grow larger.
http://www.nationaljournal.com/politics/are-treasury-and-the-fed-at-odds-over-big-banks-20130524

from above:
But at his 2010 Senate confirmation hearing to become head of Office
of Management and Budget, Lew also indicated that he didn't consider
the deregulation of Wall Street to be a "proximate" cause of the
financial crisis --an answer that put him at odds with his boss, who
declared as a presidential candidate in 2008: "It's because of
deregulation that Wall Street was able to engage in the kind of
irresponsible actions that have caused this financial crisis."

from above:
Dodd-Frank set up a system to unwind troubled institutions when they
become troubled. But it requires regulators taking a really firm stand
against large, politically-interconnected, and powerful companies
... I just think it's too easy to put the taxpayer on the hook and
bail these people out

... snip ...

Most of the comments are that the regulators have been "captured"
(along with lots of revolving doors, many of the regulators coming
from the industry they are to regulate and/or heading to the
industry). This also bleeds over into congress being considered the
most corrupt institution on earth ... considering the leverage they
have on the whole regulatory process.

In 2009, there was press about IRS going after 52,000 wealthy
Americans involved in (illegal) tax evasion. In 2011, there was press
that congress was cutting the IRS budget for investigating/prosecuting
tax evasion (even though this was an area where the amounts being
recovered hugely exceeded the cost of the recovery). And this is
separate from the issue of congress selling tax loopholes making many
forms of tax evasion legal.

a few years ago there was some TV coverage of an annual economic
conference. part of the broadcast was economists discussing the
benefits of flat-tax. they had two major points ... unrelated to any
direct flat-tax benefits. one was that it would eliminate the selling
of tax loop-holes, the major contributor to congress being considered
the most corrupt institution on earth. another was tax loop-holes are
the major contributor to tax code being enormously complex and dealing
with complexity costing significant part of GDP; flat-tax
simplification would gain a direct 3% in GDP and another 3% gain in
GDP with decisions being based on business rather than
tax-code. semi-humorously, they mentioned that major entity lobbying
against the elimination of tax loop-holes was Ireland.

current public hearings aren't turning up anything new, conjecture is
congress wants to reform special interest one time payments for tax
loopholes to something more like annual payments to congress by
special interests for the tax loopholes (even the simple fact of
holding public hearings can significantly increase money flowing into
congress).

upthread I mention getting tasked with supporting/debugging the
original CICS product beta-test ... a library online catalog effort that
was a CICS/BDAM implementation. In the mid-90s we got to go into the NIH
national library of medicine ... it was a BDAM implementation with their
own monitor. it turns out two of the people that had done the original
implementation (about the same time I was involved with the
univ. library project as an undergraduate) were still there.

The design was that medical knowledge was indexed in 80 or so different
categories (author, subject, topic, keywords, etc). Each index item had
a record built from the BDAM record numbers of the corresponding
articles. Boolean AND&OR searches were done with merges and intersects
of the article BDAM record numbers. NLM had already encountered, by the
early 80s, the boolean search problems with massive numbers of items
that the web search engines were going to start encountering in the
late 90s.

Part of NLM was UMLS ... the (ontology and taxonomy) mesh & hierarchy
of terms used to classify medical knowledge (each term would have a
corresponding record of all the BDAM record numbers of the corresponding
articles). By the early 80s, there was a growing number of professional
librarians skilled in searching NLM ... the holy grail was finding the
magic combination of terms that resulted in only a few hundred items
... frequently searches were bi-modal, zero responses or hundreds of
thousands of responses ... with only a minor change in the search
swinging between the two extremes. The "Grateful Med" interface starting
in the early 80s ... by default .... didn't return the actual responses
... just the number. "Grateful Med" was used to try a large number of
query variations until there was a response that had greater than zero
but less than thousands.

While the NLM BDAM design resulted in incredibly fast response to
individual queries ... it was still very human intensive
... professional librarian taking 2-3days elapsed time to come up with
useable search response.

...

also mid-90s ... we were brought in to a large airline res system to
look at 10 impossible things they couldn't do. ACP/TPF based ... still
using the disk lookup paradigm from the 60s ... however lots of the
master data was then being maintained on DB2 ... and the ACP/TPF copy
periodically rebuilt from the DB2 version. Route finding was a prebuilt
database of the most common origin/destination pairs. one of the
problems was that for all possible commercial flts in the world, the
database size exploded beyond manageable limits.

I had just come off a project migrating an internal 50k statement
vs/pascal VLSI layout application from rs/6000 to other vendor platforms
(IBM, coming off going into the red, was moving to COTS for a lot of
chip tools ... which included handing off some number of internal tools
to commercial chip tool companies). It turns out that the complete OAG
(all commercial flt segments for every airline in the world) could fit
in processor memory (with a little work on information representation)
... and the possible origin/destination flt routings could be
dynamically calculated in less pathlength than a TPF DBMS lookup. At the
time, I demonstrated that it was possible to handle the route lookups
for the whole world on ten rs/6000 590s by changing from a DBMS paradigm
to a calculate paradigm. Ten yrs later it would fit on a smartphone.

I returned after a month with the new paradigm implementation. It turns
out it wasn't what they actually wanted ... they had nearly a thousand
people supporting the existing paradigm ... and the changed paradigm
eliminated all of those positions. The new paradigm ran 100 times faster
than the DBMS lookup. Also the typical agent operation involved three
different queries ... and I was also able to collapse all three into a
single query (by doing additional processing). The new single query then
ran only ten times faster than any one of the previous queries ... but
only 1/3rd as many were needed (also eliminating a lot of the
specialized agent skill in threading together the correct query
sequence).

cp67 didn't have real erep ... but starting with vm370 it did ... in
fact for various of the unix ports to the mainframe ... field
engineering mandated running under vm370 in order to have erep (hardware
error recording on behalf of the virtual machine) ... aka adding
mainframe erep to a unix port was a several times larger effort than the
straight-forward port itself.

--
virtualization experience starting Jan1968, online at home since Mar1970

This was the argument against auto safety, bumpers, safety glass,
crush zones, head rests, seat belts, air bags, guard rails, etc. Part
of the issue is that even with chicken coops and fences keeping the
wolves out ... any of the chickens can actually morph into a wolf. The
lack of security has been there all along ... it isn't that the
security gets worse ... it's that the exploits may get worse (and/or
may get publicized)

...

The big news in data breaches is the ones involving financial
transaction information repositories where the crooks can leverage the
information for fraudulent financial transactions. We were
tangentially involved in the (original) cal. data breach notification
law. The issue is that security measures are normally taken in self
interest ... the problem was that the entities having the breaches
weren't at risk for the fraudulent financial transactions ... and
therefore little or nothing was being done; it was hoped that publicity
from the notifications might motivate breach countermeasures (as well as
allow the individuals at risk for fraudulent financial transactions to
take action like closing their accounts).

Also, in the mid-90s we were invited to take part in the x9a10
financial standards working group which had been given the requirement
to preserve the integrity of the financial infrastructure for *ALL*
retail payments. We did detailed, end-to-end, threat and vulnerability
studies (including the breach issues). As a result we developed some
metaphors:

security-proportional-to-risk ... the value of the financial
transaction information to the merchant is the profit from the
transaction possibly a couple dollars (to the processor possibly only
a few cents). The value of the transaction information to the crooks
is the account balance/credit-limit. As a result the crooks can afford
to outspend the defenders (merchants/processors) by a factor of a
hundred times or more.

dual-use ... the information in transaction logs needs to be kept
completely confidential and never divulged (as countermeasure to
crooks using the information for fraudulent transactions). at the same
time the information is required in dozens of business processes at
millions of locations around the planet. We've commented that because of
the diametrically opposing requirements, even if the planet were buried
under miles of information hiding encryption ... it still wouldn't
prevent/stop information leakage.

What x9a10 did was to author the x9.59 standard that slightly tweaked
the paradigm and eliminated the dual-use characteristic ... crooks
could no longer leverage previous transaction information for
fraudulent financial transactions. It also eliminated the risk from
breaches (as well as skimming & eavesdropping attacks) ... it didn't
eliminate breaches (or skimming & eavesdropping) ... but it eliminated
the risk of crooks being able to use the information for fraudulent
transactions (and therefore the motivation for crooks to perform such
operations ... aka the information was no longer worth hundreds of
times more to the crooks than to the merchants or processors).

... note that since the cal. state data breach notification
legislation, many other states have passed similar legislation. At the
federal level, in the same period, there have also been several data
breach notification bills ... about evenly divided between those similar
to the original cal. state legislation and those that would effectively
eliminate any notification (and preempt state requirements for
notification).

as an aside ... long ago and far away, we had been brought in as
consultants to a small client/server startup that wanted to do payment
transactions on its servers ... they had also invented this technology
called "SSL" they wanted to use; the result is now frequently called
"electronic commerce". Now the major use of "SSL" in the world today
is this earlier work we had done for "electronic commerce" .... to
hide the transaction information (as countermeasure to crooks
being able to use the information for fraudulent financial
transactions). The x9.59 tweak to the paradigm ... eliminating the
requirement to hide such transaction information ... then also
eliminates the major use of "SSL" in the world today.

--
virtualization experience starting Jan1968, online at home since Mar1970

from above:
The committee revealed that Apple's Irish companies, some of which are
not tax resident in any jurisdiction, allowed the group to pay no tax
on much of its overseas earnings in recent years.

In "Prophets of War" ... one of the scenarios MICC had to maintain
quarterly profits after the fall of Soviet Union, was expanding NATO
to include former Soviet block countries ... requiring them to buy
compatible arms (from US weapon merchants) underwritten by USAID. In
one of the rounds of NATO expansion, candidate countries were told
that it would boost their case (to join NATO) if they voted in the UN
for the Invasion of Iraq

A big issue in the processor rate increase is that when the latency
to main memory (on a cache miss) is expressed in terms of processor
cycles ... it is on the same order of magnitude as 1960s latency to
access disk (when measured in number of 360 processor cycles).

For decades RISC processors have had a big performance advantage over
x86 because RISC processors had pioneered out-of-order and
speculative execution as compensation for the increasing "penalty"
for accessing memory (cache miss when measured in number of processor
cycles) ... aka effectively treating instructions as independent
tasks and allowing them to execute concurrently (slightly analogous
to 1960s CICS tasking).
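
A back-of-the-envelope sketch of the effect (assumed latencies for
illustration, not any particular machine): when instructions are
treated as independent tasks, a second cache miss can overlap the
first instead of waiting behind it.

# toy dataflow model -- illustrative numbers only
instrs = {
    "load A":  (300, []),          # cache miss ... hundreds of cycles
    "load B":  (300, []),          # independent miss, can overlap A
    "add1":    (1, ["load A"]),    # must wait for A
    "add2":    (1, ["load B"]),    # must wait for B
    "combine": (1, ["add1", "add2"]),
}

# strictly serial, one instruction at a time: latencies simply add up
serial = sum(lat for lat, _ in instrs.values())

# idealized out-of-order: an instruction starts as soon as its inputs
# are ready, so the independent miss overlaps the first one
finish = {}
def done(name):
    if name not in finish:
        lat, deps = instrs[name]
        finish[name] = lat + max((done(d) for d in deps), default=0)
    return finish[name]

ooo = max(done(n) for n in instrs)
print(serial, ooo)    # 603 cycles vs 302 cycles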

Another technique is hyperthreading ... having the execution units
fed from two (or more) different instruction streams. Disclaimer: I
was actually involved in something similar for the 370/195 (in the
mid-70s ... it never shipped to customers), basically simulating a
two-processor machine and having two independent instruction streams
feed a common set of execution units. The issue was that the peak
processing rate of the 195 pipeline was rarely achieved, with most
codes running at half of 195 peak ... having two such instruction
streams had a better chance of keeping the execution units fully
occupied.
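
A toy model of that effect (my construction, not the actual 195
pipeline): if a single instruction stream only has work ready for the
shared execution units about half the cycles, a second independent
stream fills many of the empty slots.

# illustrative simulation -- assumed 50% ready rate per stream
import random
random.seed(1)

CYCLES = 100_000
P_READY = 0.5   # assumed: a stream has an instruction ready ~half the time

def busy_fraction(n_streams):
    busy = 0
    for _ in range(CYCLES):
        # the shared execution units do work this cycle if any stream is ready
        if any(random.random() < P_READY for _ in range(n_streams)):
            busy += 1
    return busy / CYCLES

print("one stream :", busy_fraction(1))   # ~0.50 of peak
print("two streams:", busy_fraction(2))   # ~0.75 of peak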

Note however, the past several generations of x86 server chips have
moved to RISC cores with a hardware layer that translates x86
instructions into RISC micro-operations ... largely mitigating the
performance advantage that RISC processors have had over x86 server
chips. Part of this pace of innovation has been attributed to
competition between the multiple vendors producing x86 chips.

Even the last two generations of mainframes have done something
similar ... much of the performance increase from z10 to z196 and
then from z196 to ec12 has been attributed to increasing adoption of
risc-like out-of-order execution, branch prediction and speculative
execution.
z10, 64 processors, 30BIPS, 469MIPS/proc
z196, 80 processors, 50BIPS, 624MIPS/proc (33% increase)
ec12, 101 processors, 75BIPS, 743MIPS/proc (20% increase)
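
The per-processor figures follow (to within rounding) from the quoted
totals:

# sanity-checking the quoted figures
for name, procs, bips in [("z10", 64, 30), ("z196", 80, 50), ("ec12", 101, 75)]:
    print(name, round(bips * 1000 / procs), "MIPS/proc")
# z10 469 MIPS/proc
# z196 625 MIPS/proc   (~33% over z10)
# ec12 743 MIPS/proc   (~19% over z196)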

recent articles say the next x86 server chip generation (about to be
released) will double per-processor performance (100% increase), to
66BIPS/proc ... as well as have 10-12 cores per chip. A two-chip
system might have 24 cores (instead of 16) and a 1584BIPS rating
... and a rumored four-chip system might have 48 cores and a 3168BIPS
rating (3168BIPS would be the equivalent of forty-two 101-processor
ec12s at 75BIPS & 743MIPS/proc).
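
The rough arithmetic behind those projections (all rumored/assumed
figures from the articles, not measurements):

bips_per_core = 66
print(24 * bips_per_core)        # 1584 BIPS, two-chip system
print(48 * bips_per_core)        # 3168 BIPS, four-chip system
print(48 * bips_per_core / 75)   # ~42 ec12-equivalents at 75 BIPS each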