dba@LISTS.DUDA.COM (David Andrews) writes:
One of you Old Ones (and I'm thinking of Shmuel in particular) correct
me on this, but didn't bare MVT have a horrendous core fragmentation
issue? My poor recollection is that HASP initiators essentially
reintroduced "partitions" to MVT to help beat that problem.

especially for long running jobs. Boeing Huntsville had installed a duplex
(2-processor SMP) 360/67 for tss/360 ... when tss/360 ran into delivery
problems ... they would run it as two (partitioned) 360/65s under
os/360. Their workload was long-running 2250 (vector graphics) design
applications which had enormous storage fragmentation issues.

To address that they modified MVT (release 13) to build 360/67 page
tables and run in virtual memory mode ... there were no page faults or
page i/o supported ... the virtual memory mode was just used as a
countermeasure to the significant storage fragmentation problem (using
virtual address tables to re-arrange memory addresses to be contiguous).

Later I was brought in for a summer at boeing seattle as part of
setting up BCS (boeing computer services) ... and put in a cp67 360/67
(simplex) in the 360/30 datacenter at corporate hdqtrs (boeing field)
... which had primarily been doing corporate payroll. That summer, the
2-processor 360/67 was also moved to Seattle from Huntsville.

When 370 was initially announced, there was no virtual memory support
... and one of the IBM SEs on the boeing account wondered what the
(virtual machine & virtual memory) cp67 path to 370 would be. some 370s
did have a sort of virtual memory (a little analogous to current LPARs)
... used for emulators ... a mode that had a base/bound flavor of
(contiguous) virtual memory (i.e. virtual memory up to the "bound" limit
and all addresses "relocated" by the "base" value). The boeing account
SE did a hack to cp67 that used the base/bound on 370s (pre virtual
memory) ... it didn't do paging but would swap the whole virtual machine
address space in/out.

also, somewhat analogous to the "preferred v=r guest" ... recent
reference (in the v=r case, the addresses were contiguous and the
virtual address was the same as the real address):
http://www.garlic.com/~lynn/2010d.html#79 LPARs: More or Less?

dcartwright@YMAIL.COM (David Cartwright) writes:
At Monsanto Europe in Brussels about 1976 I wrote some mods to VM/370 to
defeat Shadow Page Tables for V=R machines so we could run MVS under
VM/370 without a crippling overhead. I sent that code out into the world on
some (Waterloo?) VM Mods tape, but my own copy got dumped in some move
down the years. Wish I had it now, it would go really nicely on Herc.

the stuff done on csc/vm ... that leaked out to at&t ... had been about
the same time ... slightly earlier.

the design of the shadow page tables followed the semantics of the
hardware "look-aside buffer". the virtual machine has page tables that
translate virtual addresses to what it thinks are real
addresses. However, these are actually virtual addresses for the virtual
machine. So when VM runs a virtual machine ... in virtual memory mode
... it is actually run with "shadow page tables". Shadow page table
entries start out all invalid. The virtual machine immediately page
faults; vm then has to look at the (virtual) page tables (in the virtual
machine) to translate from the virtual memory address to the virtual
machine address ... vm then looks in its own page table to translate
from the virtual machine address to the real machine address. It is this
"real machine address" that is placed into the shadow tables.

The early, low & mid range 370s had a single STO stack ... every time
there was a change in the virtual address space pointer ... the hardware
lookaside buffer was cleared and all entries invalidated. Early VM370's
shadow table operation had a similar design, a single STO stack; every
time the virtual machine changed its virtual address space pointer, all
the shadow page table entries were cleared and invalidated. Moving from
SVS to MVS significantly aggravated this ... because MVS was changing
the virtual address space pointer at the drop of a hat (and vm370 was
going thru massive overhead constantly invalidating the shadow page
tables every time MVS reloaded CR1).

370/168 had a 7-entry STO stack. There was a seven-entry LRU queue of
the most recently used STO values. Each hardware look-aside buffer
entry had a 3-bit tag ... it was either one of the 7 currently valid
STO entries ... or invalid. MVS's constant reloading/changing of CR1
was mitigated on a real 168 with the 7-entry STO stack (loading a new
value into CR1 didn't do anything if the value was already one of the
seven values in the STO stack). It wasn't until vm370 release 5 with
the sepp option that vm370 finally shipped something equivalent to a
multiple STO stack (i.e. multiple shadow page tables being kept for a
single virtual machine ... to try and minimize having to constantly
clear all shadow page table entries every time MVS fiddled with CR1).
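
a toy model of the 168-style STO stack (again just illustration, not the
hardware): a CR1 reload whose STO is already resident costs nothing;
otherwise the LRU STO is displaced and only the look-aside entries
carrying its tag are purged:

from collections import OrderedDict

class StoStack:
    """Toy 7-entry STO stack; TLB entries are keyed by (tag, vpage)."""
    def __init__(self, entries=7):
        self.resident = OrderedDict()          # STO -> 3-bit tag, LRU order
        self.free_tags = list(range(entries))
        self.tlb = {}                          # (tag, vpage) -> real frame

    def load_cr1(self, sto):
        if sto in self.resident:               # hit: nothing invalidated
            self.resident.move_to_end(sto)
            return self.resident[sto]
        if self.free_tags:
            tag = self.free_tags.pop()
        else:                                  # displace the LRU STO and
            _, tag = self.resident.popitem(last=False)   # purge only its tag
            self.tlb = {k: v for k, v in self.tlb.items() if k[0] != tag}
        self.resident[sto] = tag
        return tag

s = StoStack()
t1 = s.load_cr1(0x1000)                        # address space A
t2 = s.load_cr1(0x2000)                        # address space B
assert s.load_cr1(0x1000) == t1                # reload of A: free, no purge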

The demise of FS saw a big need to get products back into the 370
product pipeline quickly. 3033 was one such effort ... take the 370/168
logic and remap it to slightly faster chips. There was also some
activity to introduce some purely MVS microcode performance assists on
3033 ... one such involved cross-memory services. One of the issues with
3033 and cross-memory services ... was the 3033 still had the 370/168
design with 7-entry STO stack ... and cross-memory services was
significantly increasing the number of STOs being used ... overrunning
the seven entries ... with corresponding big increase in look-aside
buffer entry flushing (which netted out to worse performance; somewhat
analogous to the shadow page table flushing that VM was constantly being
forced to do).

from above:
The article discusses generally the methodology used by Tarnovsky to
reverse-engineer the security IC. It includes a painstaking electron
microscopic examination of the device (presumably with captured
images), followed by insertion of micro-probes into the data busses.
... snip ...

Los Gatos had pioneered the use of electron microscopes for chip
analysis in conjunction with debugging blue iliad.

from above:
"Unless you have an electron microscope, small conductive needles to
intercept the chip's internal circuitry, and the acid necessary to
expose it." Those are some of the tools available to researcher
Christopher Tarnovsky, who perpetrated the hack and presented his
findings at the Black Hat DC Conference earlier this month.
... snip ...

from above:
Christopher Tarnovsky, who works for Flylogic Engineering, employed
electron microscopy to achieve the feat. Tim Wilson reports, "Using a
painstaking process of analyzing the chip, Tarnovsky was able to
identify the core and create a 'bridge map' that enabled the bypass of
its complex web of defenses,"
... snip ...

There is the value of the attack (the possible fraud ROI for the
attacker). One of the issues is whether the security paradigm employs
a global shared-secret (that is in all chips) or a unique secret per
chip ... as well as whether the infrastructure is online or offline
(things that might possibly limit the fraudulent financial return as a
result of the attack).

This can show up as the security proportional to risk theme
... if it costs the attacker $100,000 to attack the system ... and the
possible fraud might be at least $100,000 ... then there may have to
be additional compensating processes to act as a deterrent.
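
a toy illustration of that arithmetic (made-up numbers) ... which is
also where the global shared-secret versus per-chip secret distinction
shows up:

# Made-up numbers, purely to illustrate "security proportional to risk":
# an attack is only economic when the expected fraud return exceeds the
# attack cost, so the defender either raises the cost or caps the return.

def attack_is_economic(attack_cost, fraud_per_compromise, compromises):
    return fraud_per_compromise * compromises > attack_cost

# global shared-secret: one $100,000 attack compromises every chip
print(attack_is_economic(100_000, 50, 1_000_000))   # True -> deterrent needed
# unique secret per chip: the same $100,000 attack yields one chip
print(attack_is_economic(100_000, 50, 1))           # False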

I believe the above is relatively close to the original vm/cms fortran
that I got not long after (although I don't have a copy so I can't be
absolutely sure).

the earliest PLI version I know of was done by somebody at STL ... whom
I had provided with a copy of the fortran version.

the folklore is that adventure use at STL reached such a level ... that
management issued an edict that there was a 24hr amnesty period and then
anybody caught playing adventure during work would be severely
disciplined.

there was a period when they wanted to change all the internal vm370
logon screens to include a note about use for work-related activities
only. there was a big push by a few to get it changed to say for
management-approved activities only ... a small distinction ... but it
would allow playing games as a management-approved activity.

What is a Server?

charlesm@MCN.ORG (Charles Mills) writes:
Yes, I know everyone is defensive because a fad usually referred to as
"client/server" cost a lot of us our jobs about twenty years ago, but it's
time to move on. "Server" is not the enemy; server is a wonderful part of
computer system architecture.

in the very early days of SNA (a master/slave infrastructure for
controlling humongous numbers of dumb terminals), my wife had
co-authored a peer-coupled network architecture (AWP39; for slight
reference, APPN was AWP164) ... which the SNA group appeared to find
threatening.

she was having constant battles with SNA organization ... there would be
sporadic temporary truces where she was allowed to use anything within
the datacenter walls ... but SNA had to be used for anything crossing
the machine room walls (also there was much more focus on
tightly-coupled multiprocessor during the period ... so except for IMS
hotstandby, her architecture didn't see much uptake until sysplex).

When the PC was announced ... the communication group's use of terminal
emulation contributed significantly to the early uptake of PCs (you
could get a PC for about the same price as a 3270 terminal and, in a
single desktop footprint, get terminal emulation as well as some local
computing capability; it was a no-brainer for businesses that already
had 3270 terminal justification).

During this period, the communication group acquired quite a large
terminal emulation install base ... but as PCs became more powerful
... there was more and more requirement for more sophisticated
operation.

Unfortunately the communication group was strongly defending their
terminal emulation install base. In this period, we had come up
with 3-tier architecture and were out pitching it to customer execs
(and taking some amount of flak from the communication group),
some amount of past posts mentioning 3-tier architecture
http://www.garlic.com/~lynn/subnetwork.html#3tier

The disk division was also trying to address the opportunity with
several products that would allow mainframe to play major roles in
distributed processing world ... however, as my wife had earlier
encountered ... the communication group would escalate to corporate
and roadblock the disk group efforts ... with the line that the
communication group owned anything that crossed the walls of the
machine room.

In the mean time, the terminal emulation paradigm was starting to
represent an enormous stranglehold on the mainframe datacenter ... and
data was starting to leak out to more user-friendly platforms. The disk
division was starting to see it (data leaking out of the mainframe
datacenter) creep up into (low) double digit loss per annum. some
topic drift, misc. past posts getting to play disk engineer in
bldgs 14&15

At one point, a senior engineer from the disk division got a talk
scheduled at the annual worldwide communication group conference. He
started the talk with the claim that the head of the communication group
was going to be responsible for the demise of the disk division (because
of the stranglehold that the communication group had on the mainframe
datacenter, cutting it off from being used in more powerful ways
... resulting in an accelerating rate at which data was leaking out of
the datacenter to other platforms). misc. past posts mentioning terminal
emulation
http://www.garlic.com/~lynn/subnetwork.html#emulation

this was a major factor in the disk division funding other kinds of
things ... circumventing the communication group politics ... funding
somebody else's product that would use mainframe disks in a much more
effective way ... side-stepping the communication group's roadblocking
of announcements of disk division products. recent reference:
http://www.garlic.com/~lynn/2010d.html#69 LPARs: More or Less?
http://www.garlic.com/~lynn/2010d.html#71 LPARs: More or Less?

a different trivial example was that the internal network was larger
than the arpanet/internet from just about the beginning until possibly
late 85 or early 86. a big explosion in the size of the internal network
was in the late 70s and early 80s with lots of vm/4341 machines. Going
into the mid 80s, the customer mid-range market was moving to
workstations and large PCs (this can be seen in the big drop-off in both
43xx sales as well as dec vax sales in the period). A big factor in the
size of the internet overtaking the internal network ... was that
workstations and PCs were appearing as network nodes (again because of
the increasing size and power of the machines) ... while on the internal
network, such machines were still being restricted to "terminal
emulation". misc. past posts mentioning the internal network
http://www.garlic.com/~lynn/subnetwork.html#internalnet

I got to play a lot with both a 360/30 and then a 360/67 (the front
panels of the 65 & 67 were essentially the same).

there was an incident with 370, before virtual memory was announced,
where some virtual memory documents leaked to the press. there was a
"watergate-like" investigation ... and then they went around putting
serial numbers on the underside of the glass in all corporate copy
machines ... so all copied pages would carry the serial number of the
copy machine that the copy was made on.

for Future System ... there was an idea to do softcopy "DRM" to
minimize the leakage of documents. The vm370 development group did an
extra-secure version of vm370 that was used inside the corporation for
future system documents (they could only be "read" on a 3270 display).

One weekend, I had some dedicated machine time scheduled in the vm370
development group machine room ... and stopped by friday afternoon to
make sure everything was prepared. they took me into the machine room
... and made some reference that even if I was left alone in the
machine room, I wouldn't be able to access the FS documents.

It was just a little too much. i made sure the machine was disabled for
login from all terminals ... and then did a one-byte patch to kernel
memory ... and then everything was available (aka the one byte was in
the password-checking routine ... so that regardless of what was typed
in, it would be accepted as a valid password).

i made some reference to the only countermeasure (for somebody with
physical access) being to completely disable all mechanisms for
compromising the operation of the system.

the one i remember about the DOJ case was on document retention
... including computer printouts. there was a period when POK was
starting to store the overflow in offices ... and one bldg was losing
something like five offices/week to document retention (walking down a
hall where everybody had been moved out and some of the offices
completely filled with boxes and piles of paper) ... and there were
starting to be issues with bldg floor loading.

later i heard some reference to delivery of documents to DOJ ... and
they were scheduling freight trains for a large number of boxcars filled
with paper.

we had done a customer call on NLM in the 90s. NLM was sort of a
home-brew CICS system from the 60s with BDAM ... a couple of the guys
responsible were still around and we got to gossip about the 60s (as an
undergraduate in the 60s, the univ had an ONR grant to do a computer
catalog and was selected as a beta-test site for the original CICS; one
of my tasks was debugging CICS).
misc. past posts mentioning cics &/or bdam
http://www.garlic.com/~lynn/submain.html#bdam

By the early 80s, NLM had reached the point that web search engines got
to nearly two decades later ... the number of items was so large that
queries returned hundreds of thousands of items.

A new kind of interface, originally done on Apple, called GratefulMed
... instead of getting back the actual responses, it got back the
number of responses. GratefulMed managed query strategies ... the holy
grail was finding something that returned more than zero and fewer than
a hundred (boolean logic queries tended to be bimodal out around six or
seven tokens, switching from hundreds of thousands to zero).
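
a hedged sketch of that query strategy (a reconstruction from the
description, not the original GratefulMed code):

# count_hits(terms) is one server round trip returning only the number
# of matching items; keep adding terms until the count lands in the
# 1..99 "browsable" window, backing out any term that drops it to zero.

def refine(base_terms, candidate_terms, count_hits):
    terms = list(base_terms)
    n = count_hits(terms)
    for term in candidate_terms:
        if 0 < n < 100:
            break                          # holy grail: small nonzero result
        m = count_hits(terms + [term])
        if m > 0:                          # skip terms that hit the bimodal
            terms, n = terms + [term], m   # cliff (hundreds of thousands -> 0)
    return terms, n

# toy corpus: each "document" is a set of tokens
docs = [{"heart", "attack"}] * 250_000 + [{"heart", "attack", "aspirin"}] * 42
hits = lambda terms: sum(all(t in d for t in terms) for d in docs)
print(refine(["heart"], ["attack", "aspirin", "unicorn"], hits))
# (['heart', 'attack', 'aspirin'], 42)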

mike@MENTOR-SERVICES.COM (Mike Myers) writes:
Back in the '60s, the Field Engineering Division took over first-level
support of OS/360, creating a new kind of Customer Engineer called a
Program Support Representative (PSR). Their primary role was to
examine a dump and determine if the problem was hardware or software
related. If hardware, they would turn it over to a hardware customer
engineer. If software, then they could attempt to fix or bypass the
issue with a zap, if feasible. If not, then they would report it to
development and try to work a temporary fix.

in the very early days of REX (before it was renamed and released to
customers) ... I wanted to demonstrate that it wasn't just another
pretty scripting language.

I selected that I would redo the IPCS dump reader (implemented in a
large number of assembler LOCs) ... taking less than six weeks of my
time over a period of under three months ... it would have ten times the
function and run ten times faster (some sleight of hand to make the rex
implementation, doing ten times the function, run ten times faster than
the assembler). part of the effort was gathering signatures of failure
modes ... and building a library of automated scripts that would examine
dumps for all known, recognizable failure-mode signatures.
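
a hypothetical sketch of the signature-library idea (the real thing was
REX scripts run against dump storage; the predicates and field names
here are invented):

# Each known failure mode is a predicate over the dump, registered
# once; a scan runs every predicate and reports which signatures match.

SIGNATURES = []

def signature(name):
    def register(predicate):
        SIGNATURES.append((name, predicate))
        return predicate
    return register

@signature("free-storage chain clobbered")
def _(dump):
    return dump.get("FREE_CHAIN_HEAD") == 0

@signature("wild branch (PSW outside the kernel)")
def _(dump):
    return dump.get("PSW_ADDR", 0) > dump.get("KERNEL_END", 0)

def scan(dump):
    return [name for name, predicate in SIGNATURES if predicate(dump)]

print(scan({"FREE_CHAIN_HEAD": 0, "PSW_ADDR": 0x80000, "KERNEL_END": 0x40000}))
# ['free-storage chain clobbered', 'wild branch (PSW outside the kernel)']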

I also made it capable of running against the live kernel as well as
patching kernel storage.

For some reason I was never able to get it released as a replacement
for the standard product ... but at one point nearly all internal
datacenters and PSRs were using it.

Getting tired of waiting for approval for it ever to be released, I
managed to get a presentation approved for BAYBUNCH ... where I went
into detail on how I did the implementation. Within three months after
that presentation ... there were at least two other implementations.

John.McKown@HEALTHMARKETS.COM (McKown, John) writes:
This just occurred to me. I wonder if I'm suffering from lack of
oxygen to the brain. But, as best as I can tell, SDSF is capable of
accessing the SPOOL files for a non-active JES2 system. At least as I
recall from the past, I did this. So I got to wondering. Suppose I
have a z/Linux system running in the same complex. Perhaps under
z/VM. It might be nice (FSVO nice), if I could logon to z/Linux and do
SDSF at least to the extent of being able to read SPOOL files. Of
course, being a bit paranoid, I would only allow a READONLY access to
the DASD containing the SPOOL data. And there is always the specter of
security. There may be SPOOL files which I should not be able to even
READ (like payroll or HIPAA reports or ...). So this may be a stupid
idea. But the thought is intriguing to me.

there had been significant enhancement of the cms os/simulation
... including handling pretty much all (both r/o and r/w) os-formatted
disks & files (the cms os/simulation had been less than 64kbytes of code
... but there were comments that it was really cost-effective compared
to the size of the MVS os/simulation). However, this was before the
shutdown of the vm370 group and their move to POK to support mvs/xa
development (the person responsible left the company and remained in the
boston area) ... and the enhancements appeared to evaporate.

A couple years later I was getting heavily involved in redoing assembler
implementations in more appropriate languages ... recent reference
to doing DUMPRX in rexx
http://www.garlic.com/~lynn/2010e.html#10 Need tool to zap core

I also redid a spool file system implementation in pascal running in a
virtual machine. The los gatos lab had done the original mainframe
pascal implementation for vlsi tool implementation ... that pascal went
thru several product releases, starting as an IUP before evolving into a
program product.

That pascal was also used to implement the original mainframe tcp/ip
product (and suffered from none of the buffer length exploits that are
common in C language implementations). recent reference to that tcp/ip
implementation:
http://www.garlic.com/~lynn/2010d.html#72 LPARs: More or Less?

My full (spool file system) implementation moved the complete system
spool file implementation into a virtual address space ... however, I
packaged a subset of the routines as independent utilities for doing
spool file diagnostics (on normal systems). It is actually a relatively
straightforward activity.

Total aside, part of the issue was that the internal networking
technology ran thru the spool system (implemented as a service virtual
machine, or virtual appliance) ... and for various implementation
reasons would only get 20kbytes-40kbytes/sec sustained thruput (before
controller caches, etc) ... an issue in HSDT. some past posts
http://www.garlic.com/~lynn/subnetwork.html#hsdt

HSDT was also having some vendors, on the other side of the pacific,
build some hardware. The friday afternoon before a vendor visit trip,
the communication group distributed an announcement for a new
"high-speed" discussion group in an internal forum ... with the
following definitions:
low-speed <9.6kbits
medium-speed 19.2kbits
high-speed 56kbits
very high-speed 1.5mbits

monday morning in vendor conference room on the other side of the
pacific:
low-speed <20mbits
medium-speed 100mbits
high-speed 200-300mbits
very high-speed >600mbits

shmuel+ibm-main@PATRIOT.NET (Shmuel Metz , Seymour J.) writes:
IBM was using BSL for OS/360, especially TSO. Nor was IBM the first to use
a HLL to write a commercial operating system; Burroughs wrote MCP in
various flavors of Algol.

some of the 7094/ctss people went to the science center on 4th flr of
545 tech sq (cp40/cms on a specially modified 360/40 with virtual memory
hardware, morphed into cp67/cms when standard virtual memory became
available on 360/67, invented GML ... which later morphs into SGML,
HTML, XML, etc). others went to 5th flr of 545 tech sq and implemented
multics ... all done in PLI. multics references:
http://www.multicians.org/multics.html

multics was the first to release a commercial relational database
product, in 1976.

the air force did a security study of multics in the early 70s. a few
years ago, ibm research did a paper revisiting the multics security
study ... recent post in this mailing list on the subject:
http://www.garlic.com/~lynn/2010b.html#97 "The Naked Mainframe" (Forbes Security Article)

search engine history, was Happy DEC-10 Day

Michael Wojcik <mwojcik@newsguy.com> writes:
As for programming languages that were actually meant to be used for
real work, there are many sites devoted to determining which was
worst, and those show plenty of languages substantially worse than RPG
by any reasonable measure. (Obviously "worst programming language" is
ultimately a subjective question, but there are grounds for agreement
by reasonable observers.)

MUMPS seems to get a lot of votes as one of the worst languages.

but the language & system has hung on for quite a while at various
medical locations ... like VA hospitals; it offers arbitrary relations
between any two items (many-to-many) ... something much more difficult
to achieve in an RDBMS.
http://en.wikipedia.org/wiki/MUMPS

when we were doing ha/cmp ... we subcontracted a lot of work to a small
startup (that grew to a couple hundred people) founded by a couple of
people who had been working at project athena ... and one of the people
that had been involved in cp40 & cp67 (later was head of part of FS and
then retired) and at the time was also director of MGH/Harvard medical
school dataprocessing. misc. past posts mentioning ha/cmp
http://www.garlic.com/~lynn/subtopic.html#hacmp

Anne & Lynn Wheeler <lynn@garlic.com> writes:
some of the 7094/ctss people went to the science center on 4th flr of
545 tech sq (cp40/cms on a specially modified 360/40 with virtual memory
hardware, morphed into cp67/cms when standard virtual memory became
available on 360/67, invented GML ... which later morphs into SGML,
HTML, XML, etc). others went to 5th flr of 545 tech sq and implemented
multics ... all done in PLI. multics references:
http://www.multicians.org/multics.html

for additional language trivia ... the boston programming center was on
the 3rd flr (until the cp67 development group split off from the science
center, morphed into the vm370 development group and absorbed the 3rd
flr along w/all of the boston programming center; this was before
outgrowing the 3rd flr and moving into the bldg out at burlington mall
that had been vacated when SBC was transferred to CDC).

from above:
She was also a member of the subcommittee which created COBOL. Sammet
was president of the ACM from 1974 to 1976.
... snip ...

and BPC also had produced CPS ... an online interactive system that
supported PLI and BASIC and ran under os/360. CPS also had a special
microcode performance assist for the 360/50 (when BPC was absorbed by
the vm370 group, they did a version of CPS that ran in CMS ... somewhat
akin to the work that was done on apl\360 for cms\apl; somewhere in
boxes I have hardcopy of the CPS/CMS description).

"C" had been at the science center in the 60s ... and was no longer
there when I joined the science center in the 70s (was then also
director of MGH/Harvard medical school dataprocessing).

"L" had been with ACIS over at project athena ... ibm & dec both funded
project athena equally ... and both had an assistant director at project
athena (for a time, the ibm assistant director at project athena was CAS
... who had invented the compare&swap instructure when at the science
center), we periodically did audit/review of Project athena stuff (one
week we were there, sat thru working out how kerberos would implement
cross-domain support; i've recently mentioned that couple years ago sat
thru a SAML implementation presentation and the SAML message flows
looked identical to kerberos cross-domain). I have some memory that he
was involved in the X-window 8514 display driver support in AOS ... both
the PC/RT and the romp co-processor board that went into PS2.

"M" had previously been at the science center.

The science center had moved from 545 tech sq ... down the street to
101 Main. When the science center was dissolved ... CLaM moved into the
space vacated by the science center.

I'd commented in the 80s about tripping across some unix scheduler code
that looked very similar to code in "release 1" cp67 that I had
completely replaced as an undergraduate in the 60s ... possibly both
inherited from something in CTSS (cp67 directly from ctss, and unix
possibly indirectly by way of multics; DR may comment on this over in
a.f.c.)

the idea was to do a micro-kernel base ... in a higher level language
... like some flavor of pascal ... a small focused effort, possibly
starting with some existing assembler base and recoding it in another
language.

cp67 had started out as a pretty clean micro-kernel. one of the issues
with vm370 and later flavors was that, with an increasing number of
traditional operating system developers working on the platform, it
started to take on the characteristics of traditional operating system
bloat. Hardware microcode engineers (as developers) always seemed to
be better at preserving the microkernel paradigm.

So, one of the things that happened when tss/360 was decommitted was
that the support/development group was radically reduced ... which
tended to be reflected in a reduction in implementation bloat (aka at
one point tss/360 supposedly had something approaching 1100-1200 people
at a time when cp67/cms had 11-12 people). After a while tss/370 was
starting to take on some interesting characteristics. Something that
further contributed to this was a project for AT&T to do a stripped-down
tss/370 kernel (SSUP) with a unix api layered on top (also presented at
the mar82 conference).

In any case, there was an activity to compare the size/complexity of
the tss/370 ssup against vm370 ... as a candidate for the new kernel
base ... to be converted to a higher level language. a small piece of
the analysis compared vm/sp against the tss/370 kernel.

At one point there was a corporate get-together in the kingston
plantsite cafeteria ... it was supposed to be a "VM/XB" meeting (play on
next generation after XA); the cafeteria people misheard and put up a
sign that said "ZM/XB meeting". The problem was that the corporation
then decided it was going to be strategic ... and it grew into a couple
hundred people writing specs (a FS moment?) ... an old email
http://www.garlic.com/~lynn/2007h.html#email830527 in this post
http://www.garlic.com/~lynn/2007h.html#57

Anne & Lynn Wheeler <lynn@garlic.com> writes:
the idea was to do a micro-kernel base ... in a higher level language
... like some flavor of pascal ... a small focused effort, possibly
starting with some existing assembler base and recoding it in another
language.

One of the two people responsible (for the original mainframe pascal)
then leaves to do a startup building a clone 3270 controller; they were
figuring that TSO response was so horrible (especially compared to CMS)
that they could try and offload some amount of the TSO operations into
the controller ... to try and improve the appearance of interactive
response ... selling into the mainframe TSO market (I used to drop in
periodically to see how they were doing). It never caught on ... and the
person then shows up as VP of software development at MIPS. After SGI
buys MIPS, he shows up as general manager of the business unit
responsible for JAVA (pretty early in JAVA life).

What's with IBMLINK now??

John_J_Kelly@AO.USCOURTS.GOV (John Kelly) writes:
Here's my response from IBM FeedBack about it. I find that it goes away
after a while but happens mostly with Firefox. When I get it, I go to IE
and get in OK. I had an offline email from someone else who's had the
problem and they accepted the site and apparently got in.

it isn't so much that it is an invalid certificate ... it is an
incorrect certificate. the whole point is that the domain name in the
certificate is supposed to correspond to the URL that the browser is
using. browsers have some rules about wild-card (fuzzy) matching between
what is in the certificate and the URL they are using ... in general,
domain names have to EXACTLY match the URL ... or, for a wild-card, the
trailing part (in the certificate) has to match the corresponding field
in the URLs used by the browser.
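
a minimal sketch of that matching rule (modern browsers add further
restrictions; the hostnames are placeholders):

def cert_matches(cert_name, url_host):
    cert_name, url_host = cert_name.lower(), url_host.lower()
    if cert_name == url_host:
        return True                          # exact match
    if cert_name.startswith("*."):
        suffix = cert_name[1:]               # e.g. ".ibm.com"
        labels = url_host.split(".")
        # wildcard covers exactly one leading label of the hostname
        return (len(labels) > 2 and url_host.endswith(suffix)
                and "." + ".".join(labels[1:]) == suffix)
    return False

print(cert_matches("www.ibm.com", "www.ibm.com"))    # True
print(cert_matches("*.ibm.com", "www.ibm.com"))      # True
print(cert_matches("*.ibm.com", "a.b.ibm.com"))      # False (one label only)
print(cert_matches("*.webhost.com", "www.ibm.com"))  # False (wrong cert)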

long ago and far away, we were brought in to consult with a small
client/server startup that wanted to do payment transactions on their
server ... and they had invented this technology called SSL that they
wanted to use (the result is now frequently called electronic commerce).
As part of the effort, we had to do some in-depth review of the protocol
and browser operation ... as well as business process walkthrus with
some of the new operations calling themselves Certification Authorities.
misc. past posts about ssl digital certificates
http://www.garlic.com/~lynn/subpubkey.html#sslcerts

it turns out that there were several security assumptions about how all
the pieces actually fit together and worked ... in some number of cases,
some of those security assumptions were almost immediately violated
(which can be considered at the root of some number of current
compromises).

one issue is that people may be actually going to different
machines/gateways

I've seen it with "ibm greater connection" ... there are various
webhosting services ... with multiple physical locations around the
world ... where the connection is directed to the "closest" facility.
Lots of big corporations will outsource some part of their operation to
such a facility (in part because they have these massive operations at
several places around the planet).

at various times when something is going on ... i've had SSL
certificates come back from the underlying webhosting facility
... rather than the SSL certificate for the ibm server that I'm trying
to connect to.

there are lots of tricks played mapping a URL to multiple different
physical pieces of hardware ... (like load balancing, the server with
the least number of internet hops, etc). sometimes maintenance on all
these pieces can get out of sync (as well as the setup for the alias
identities for possibly different physical boxes).

complicating this was an item a month or so ago about reports of
attacks on some number of (well-known) SSL servers.

The original way that VM370 did shared segments was all thru the "IPL"
command. A special CMS kernel was saved that included an image of
APL. The user's virtual machine was set up to automatically "IPL" cmsapl
... at logon time.

Some of the HONE configurators got some performance enhancement by
being recoded in FORTRAN ... requiring HONE to drop out of APL into
native CMS, run the fortran application and then resume APL. This wasn't
possible with the "IPL cmsapl" scenario ... w/o requiring the user to
manually execute all of the associated commands.

part of the page-mapped stuff was being able to have a new mechanism
for shared pages ... having IPL CMS ... with the cms image separate
from the shared pages for the APL executable. HONE installed this ... to
facilitate automatically being able to drop out of APL, execute the
fortran application and then resume APL execution (transparent to the
end user). It wasn't absolutely necessary to have pam-formatted disks
to use this ... because the changes would also work with normal
"saved" systems (however, all the changes were required, even if there
weren't CMS page-mapped formatted filesystems).

Later the development group picked up a lot of the CMS changes (except
for the paged-mapped filesystem changes) for additional shared stuff
(shared editor, shared exec, misc. other stuff) ... and a very small
subset of the CP changes (w/o any of the paged-mapped filesystem
changes) ... and released them as part of VM370 release 3 DCSS.

The original way that I added the editor, exec and other stuff to an
additional shared segment was using a facility where the segment could
be floated/relocated to any available place in the virtual address
space. This required that the code be address/location independent (the
first thing that virtual CMS IPL did was move the additional shared
segment(s) to an available place).

Old email about modifying BROWSE & FULIST to reside in a shared segment
... as well as swizzling addresses so they were location independent.

I have modified BROWSE & FULIST3 to go into the shared relocating
nucleus. One of the fall outs of that is FULIST3 can now call DMSINA
a(ABBREV) directly rather than duplicating the code within the
module. My changes are not very clean. I moved all variables to the
bottom of the csect, duplicated them in the COMSECT dsect and then
removed all labels from the copy in the csect. At initialization time
I do an MVCL from the csect to dsect to initialize the area. The
relocating shared segment(s) are initially positioned at x'30000' at
IPL time, prior to relocation.
... snip ... top of post, old email index.

When the development group picked up the changes for release 3 DCSS
... they eliminated the "floating" characteristic. That change
required an installation to have multiple different copies of the CMS
additional shared segments ... all created for (different) "fixed"
virtual address locations. The original implementation allowed
everybody to share the same exact copy ... regardless of the size of
the particular virtual machine. misc. past posts mentioning
address/location independent code:
http://www.garlic.com/~lynn/submain.html#adcon
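
a toy model of what the "floating" segment bought (python, invented
layout): everything is address-independent except a known table of
address constants, so sliding the segment is just adding the delta to
those words:

def relocate(segment_words, adcon_offsets, built_at, loaded_at):
    """Only the words listed in adcon_offsets (embedded address
    constants) get "swizzled" by the delta between where the segment
    was built and where it lands."""
    delta = loaded_at - built_at
    out = list(segment_words)
    for off in adcon_offsets:
        out[off] += delta
    return out

# segment built at x'30000' with adcons at word offsets 1 and 3
seg = [0x47F0, 0x30010, 0x0700, 0x30008]
print([hex(w) for w in relocate(seg, [1, 3], 0x30000, 0x80000)])
# ['0x47f0', '0x80010', '0x700', '0x80008'] -- same code, new location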

search engine history, was Happy DEC-10 Day

Charles Richmond <frizzle@tx.rr.com> writes:
Do *not* forget LISP and APL. These two languages were *not* used so
much for "production" work, but they were developed within the 60's time
frame.

a lot of apl was used for business modeling and what-if things ... and
a lot of stuff that today is done with spreadsheets.

apl\360 was a closed interactive environment with its own terminal
support, dispatching, swapping, etc ... done in a real-memory system
with typically 16kbyte to 32kbyte workspaces that swapped as an integral
unit. the science center migrated that to cms for cms\apl ... was able
to discard all the stuff but the interpreter ... and open the workspace
size up to virtual memory (allowing more substantial real-world
applications to be implemented), adding APIs to system functions
(like file read/write). One of the things that had to be re-done from
apl\360 to cms\apl was storage management; apl dynamically allocated
new storage on every assignment ... until it exhausted the workspace
... and then would do garbage collection and start over; this had
disastrous page-thrashing characteristics in large virtual memory
operation.
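
a toy model of that allocation pattern (illustrative only): assignments
eat fresh storage until the workspace is exhausted, and the collection
sweep then touches every page of the workspace ... harmless at 32kbytes
real, disastrous at megabytes virtual:

class Workspace:
    def __init__(self, size):
        self.size, self.next = size, 0
        self.pages_touched = set()
        self.gc_count = 0

    def assign(self, nbytes, live_bytes):
        if self.next + nbytes > self.size:   # exhausted: garbage collect
            self.gc_count += 1
            # compaction sweeps the whole workspace -> touches every page
            self.pages_touched.update(range(self.size // 4096))
            self.next = live_bytes           # survivors compacted to front
        self.next += nbytes
        self.pages_touched.add(self.next // 4096)

ws = Workspace(16 * 1024 * 1024)             # large virtual workspace
for _ in range(100_000):
    ws.assign(nbytes=256, live_bytes=64 * 1024)   # mostly-dead assignments
print(ws.gc_count, "collections,", len(ws.pages_touched), "pages touched")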

The internal HONE system was a world-wide, virtual machine based,
online sales&marketing support system ... with the majority of the
applications written in apl. In the early 80s, just the US-based HONE
system had userids approaching 40,000. misc. past posts mentioning HONE
http://www.garlic.com/~lynn/subtopic.html#hone

charlesm@MCN.ORG (Charles Mills) writes:
It also might be mentioned that there was an incentive to develop a
quick-and-dirty DOS/360 that came from the shortage of machine time on the
7094 simulators (being used to develop OS/360) versus the amount of 360
hardware that was coming out of the factory but unusable due to the lack of
an operating system.

there was less difference between 360/370 w/o virtual memory and 370
with virtual memory.

for 370 virtual memory there was a (distributed) development project
between the science center and endicott to modify cp67 to support 370
virtual machines with virtual memory (i.e. 370 had some new
instructions, and the format of the virtual memory tables and the
definition of the control registers were different).

since the cp67 service had non-employee users (students and others from
various institutions in the boston area: BU, MIT, Harvard, etc), the
project went on with virtual cp67 in a virtual machine (to help keep
information about virtual memory from leaking to the unauthorized). then
to test the virtual 370 operation, another version of cp67 was modified
to conform to the 370 architecture (instead of the 360/67
architecture). A year before the first engineering 370, the following
was in general use:
360/67 real machine
"cp67l" running on the real hardware
"cp67h" running in a 360/67 virtual machine w/virtual memory
"cp67i" running in a 370 virtual machine w/virtual memory
"cms"

when the first engineering 370 with virtual memory hardware became
operational, a version of "cp67i" was booted on the machine to test its
operation. The first boot failed, and after some diagnostics, it turned
out that the engineers had reversed the definition of two of the new
opcodes; the cp67i kernel was quickly patched (for the incorrect
opcodes) and cp67i came up and ran.

things were a little different for MVT->SVS. There were minimal changes
to MVT to build a single virtual address table ... and handle page
faults and paging operations (not a whole lot of difference from running
MVT in a large virtual machine ... with a minimum amount of "native"
virtual memory support). The biggest change for MVT->SVS was that the
application channel programs passed via EXCP/svc0 had to be translated,
aka a shadow copy of the CCWs built that had real addresses in place of
the virtual addresses (along with the associated pinning/unpinning of
the virtual pages to real addresses). To do this, a copy of the
corresponding code from CP67 was borrowed (i.e. when cp67 runs virtual
machine channel programs it has to scan the virtual channel program,
creating a "shadow" copy with real addresses in place of virtual
addresses).
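
a hedged sketch of that channel program translation (the real code was
360 assembler; the tuple layout and the no-page-crossing simplification
are mine):

def translate_channel_program(virt_ccws, pin_page):
    """virt_ccws: list of (opcode, virt_addr, count) tuples.
    pin_page(vpage) -> real frame number, page fixed until unpinned."""
    shadow, pinned = [], []
    for opcode, vaddr, count in virt_ccws:
        vpage, offset = divmod(vaddr, 4096)
        # simplification: assume the data area doesn't cross a page;
        # the real translator split such CCWs using data chaining
        assert offset + count <= 4096
        frame = pin_page(vpage)              # pin until i/o completes
        pinned.append(vpage)
        shadow.append((opcode, frame * 4096 + offset, count))
    return shadow, pinned

real_of = {10: 3, 11: 7}                     # toy vpage -> frame map
shadow, pinned = translate_channel_program(
    [(0x02, 10 * 4096 + 128, 80)], real_of.__getitem__)
print(shadow)   # [(2, 12416, 80)] -- real address in place of virtual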

When 370s with virtual memory hardware started being deployed
internally, "cp67i" was the standard system that ran on them for a long
time (or "cp67sj" ... which was "cp67i" with 3330 & 2305 device support
done by a couple engineers from san jose).

something different was done for 3081 and TPF. Originally 308x was
never going to have a non-multiprocessor machine ... however, at the
time, TPF didn't have multiprocessor support. The mechanism to run TPF
on a 3081 was to run vm370 with TPF in a virtual machine. In some
number of TPF 3081 installations, that tended to leave the 2nd 3081
processor idle. The next release of vm then had some special
modifications to improve TPF thruput ... it added a bunch of
multiprocessor chatter overhead that allowed parts of vm kernel
execution to run asynchronously with TPF operation ... this drove up
virtual machine overhead by about 10% ... but increased overall TPF
thruput ... by having the overhead executed on the otherwise idle
processor. The problem was that the (additional multiprocessor chatter
overhead) change degraded the thruput of all the multiprocessor
customers (that normally ran with all processors fully loaded).

the straightforward solution would be to leave out the "processor 1"
frame ... but the "processor 0" was built at the top of the box ... and
there was some concern that the straightforward solution would leave the
box top-heavy.

it wasn't just branch people that used HONE, there were also
installations for hdqtrs. When EMEA hdqtrs moved from the states to
Paris, I was asked to help with the EMEA hdqtrs HONE installation in
Paris.

HONE had started out as several (virtual machine) cp67 datacenters in
the US to give SEs "hands-on" experience with guest operating systems
after the 23jun69 unbundling announcements (which included charging for
SE time; a lot of SE experience had been hands-on at customer accounts,
and nobody figured out how not to charge for learning time). misc. past
posts mentioning unbundling

a subset of those changes (just the non-virtual memory 370
instructions) was also installed on the HONE cp67 systems, for SEs to
try guest operating systems released for 370 machines.

The science center had also done a port of apl\360 to cms for
cms\apl. apl\360 installations had typically been 16kbyte-32kbyte
workspaces ... the science center had to do a lot of work to open that
up to large virtual memory operation (standard apl\360 workspace storage
management resulted in severe page thrashing in a virtual memory
environment). the science center also added an API to access system
services (like file read/write) ... which caused lots of heartburn
among the APL purists.

The combination of significantly larger workspace sizes and being able
to do things like file I/O ... allowed much larger real-world
applications to be implemented and run. One of the very early flavors
of this was the business planning people in Armonk doing an APL
business model that they would run on the Cambridge cp67 system. This
required the business people loading the most sensitive and valuable
corporate assets onto the cambridge system (detailed customer
information), which required an extremely high level of security
(especially with numerous students from places like BU, MIT, and
Harvard also having access to the Cambridge system).

Similar APL applications were being deployed on HONE and came to
dominate all activity, and the original virtual guest operation for SEs
disappeared.

After VM370 became available, HONE migrated from cp67 (& cms\apl) to
vm370 (& apl\cms done at the palo alto science center). The APL
purists also came out with "shared-variable" paradigm as an APL-way of
doing system services APIs (as opposed to what had been shipped in
cms\apl).

In the mid-70s, the HONE datacenters were consolidated in silicon
valley (in a bldg across the back parking lot from the palo alto
science center). PASC had also done the 370/145 APL microcode assist
for apl\cms ... which gave APL about a factor of ten performance
boost (APL applications on a 145 ran about as fast as on a 168 w/o the
microcode assist). HONE had multiple 168s for their operation and
looked at possibly being able to move some number to 145s (with the
microcode assist). The problem was that a lot of the HONE APL
applications were using the larger virtual memory and file i/o
capability ... which also required the added channels and real storage
of the 168s.

So one of the things I started working on during 1975 (after having
done the move from cp67 to vm370) was a number of projects supporting
multiprocessors. In addition to the other bits and pieces that were
leaking out ... eventually it was decided to ship multiprocessor
support in vm370 release 4.

There was a problem. While the original unbundling managed to make the
case that kernel software should still be free ... that decision was
starting to shift after the clone processors managed to get a market
foothold during the Future System era ... FS was going to completely
replace 360/370, and during that era the 370 product pipelines were
allowed to go dry ... then FS was killed and there was a mad scramble
to get stuff back into the 370 pipeline; misc. past posts mentioning FS

some of that accounts for picking up bits and pieces of my stuff in
csc/vm, for vm370 release 3 (like the cp & cms changes for DCSS). it
also contributed to the decision to take my resource manager (lots of
stuff that had already shipped in the cp67 product and was dropped in
the cp67->vm370 simplification) and release it as a separate kernel
product. However, the resource manager got picked to be the guinea pig
for starting to charge for kernel software, and I had to spend a bunch
of time with business people and lawyers about kernel software charging.
http://www.garlic.com/~lynn/submain.html#unbundle

One of the issues was that I included a whole lot of stuff in the
resource manager ... including a bunch of kernel structure that the
multiprocessor design&implementation required. now one of the policies
for kernel software charging (during the transition period) was that
direct hardware support would still be free, and "free software"
couldn't be shipped that required priced software as a
prerequisite.
http://www.garlic.com/~lynn/subtopic.html#fairshare

The resource manager then represented quite a problem for shipping
multiprocessor support in vm370 release 4. Finally it was decided to
move 90% of the lines of code ... that had been in the charged-for
resource manager ... into the free kernel (since it was required for
shipping "free" multiprocessor support) ... but leaving the price of
the resource manager the same.
http://www.garlic.com/~lynn/subtopic.html#smp

In any case, one of the things I did for HONE was provide them with a
release 3 version of production multiprocessor support (well before
release 4 of vm370 was shipping) ... so they could start upgrading
their 168s to multiprocessor machines (needing all the computer power
they could get) ... they were already putting together a maxed-out
loosely-coupled single-system-image configuration in the consolidated
US HONE datacenter (with fall-over and load-balancing; possibly the
largest single-image complex in the world at the time).
http://www.garlic.com/~lynn/subtopic.html#hone

whither NCP hosts?

Jonno Downes <jonnosan@gmail.com> writes:
So I reckon would be lots of fun to hack around with those old RFCs
(i.e. get a C64 or something to talk Telnet over NCP), but obviously
that only works if there's some endpoints to connect to. Does such a
beast exist? Is there anything around (real or emulated) that talks
NCP ? It seems like it would be possible to hook up some emulated IMPs
talking to each other via an IP VPN, so I can't imagine I'm the first
to have considered this, and if anyone has actually done it, I figure
they may well be part of this newsgroup, so....

there were some number of NCPs during the 70s. The program that ran in
the 3705 telecommunication front-ends for SNA was also called NCP (the
term "network" in both SNA and NCP is possibly some term inflation
... since the host side was VTAM ... aka a dumb terminal control
infrastructure).

On the ARPANET ... there was the IMP-to-IMP stuff that was the "network"
... and there was a "host protocol" ... that the hosts talked to the
IMPs (there is some ambiguity as to the size of the arpanet/internet at
the time of the 1/1/83 switch-over to internetworking protocol; talk of
"255" ... which possibly was the number of hosts ... because there were
other references placing the number of IMPs around 100; aka multiple
hosts attaching to a single IMP).

One of the tcp/ip things in FTP is its use of two channels ... separate
for control & data ... an attempt to map the host-to-host implementation
into the internetworking implementation (there is an RFC about problems
trying to do the FTP host-to-host out-of-band control stuff in an
internetworking environment).
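
for concreteness, a minimal sketch of the surviving two-connection
design (RFC 959 semantics; no error handling, multi-line replies
ignored, hostname is a placeholder):

import re
import socket

def fetch_listing(host):
    ctrl = socket.create_connection((host, 21))          # control channel
    f = ctrl.makefile("rw", newline="")                  # no newline munging
    def cmd(line):
        f.write(line + "\r\n")
        f.flush()
        return f.readline()
    f.readline()                                         # 220 greeting
    cmd("USER anonymous")
    cmd("PASS guest@example.com")
    reply = cmd("PASV")                                  # 227 (h1,h2,h3,h4,p1,p2)
    nums = list(map(int, re.search(r"\(([\d,]+)\)", reply).group(1).split(",")))
    data = socket.create_connection(                     # separate data channel
        (".".join(map(str, nums[:4])), nums[4] * 256 + nums[5]))
    cmd("LIST")                                          # bytes flow on data conn
    listing = data.makefile("r").read()
    data.close()
    cmd("QUIT")
    return listing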

SHAREWARE at Its Finest

jim.marshall@OPM.GOV (Jim Marshall) writes:
P.S. Getting the first of a new generation of IBM computer, the IBM 3032,
made us a showplace besides being in the Pentagon. But 6 months later IBM
shipped the first IBM 3033 to Singer up in New Jersey, we were obsolete and
never got the IBM 3032-AP/MP we hoped would come.

in the wake of the demise of future system, there was a mad rush to get
stuff back into the 370 product pipelines.

they took the 158 integrated-channel microcode and split it out into a
separate (dedicated 158 engine) box as the 303x channel director.

the 3031 then becomes a 158 with just the 370 microcode plus a
2nd/separate 158 with just the integrated-channel microcode (with the
same 158 engine no longer needing to be shared between executing both
the 370 microcode and the integrated-channel microcode, 3031 thruput
becomes almost as fast as a 4341).

the 3032 then becomes a 168 with the 303x channel director (a 158 with
just the integrated-channel microcode) instead of the 168 external
channel boxes.

the 3033 started out being the 168 logic mapped to chips that were 20
percent faster. The chips also had a lot more circuits per chip
... initially that would be unused. In the 3033 product cycle there was
some effort to redesign pieces of the 168 logic to take advantage of the
higher circuit density ... and finally got the 3033 up to 50% faster
than the 168.

there was eventually a 3033N sold that was crippled to be slower than
the 168/3032, but it could be field-upgraded to a full 3033.

they were charged for kernel add-ons to the free vm370 release 6 base.

as part of the 23jun69 unbundling announcement, there was the start of
charging for application software (somewhat as the result of various
litigation); however, they managed to make the case that kernel software
was still free.
http://www.garlic.com/~lynn/submain.html#unbundle

with the demise of the future system project ... there was a mad rush
to get items back into the 370 product pipeline ... also there have been
claims that it was the distraction of the FS project (and the resulting
lack of products) that allowed clone processors to get a foothold in the
marketplace. misc. past FS posts

part of the rush to get things back into 370 product pipeline
contributed to picking up various 370 things that I had been doing
during the FS period for product release. some recent discussion
(as well as "Unbundling & HONE" post referenced above)
http://www.garlic.com/~lynn/2010e.html#21 paged-access method

One of the things was the resource manager. However, possibly because
of the foothold that the clone processors were getting in the market,
there was a decision to start charging for kernel software ... and my
resource manager was selected as the guinea pig. Also discussed here

which shipped in the middle of the vm370 release 3 time-frame. As
mentioned in the above ... vm370 multiprocessor support was to go out in
vm370 release 4 ... but the design/implementation was dependent on lots
of code in the resource manager. The initial policy for kernel charging
was that hardware support would still be free (including multiprocessor
support) and couldn't have a prerequisite that was charged for ... as in
my resource manager. The eventual solution was to move approx. 90% of
the code from the resource manager into the "free" base ... but leave
the price of the "release 4" resource manager the same (as release 3,
even tho it was only about 10% of the code).

for vm370 release 5, the resource manager was repackaged with some
amount of other code ... including "multiple shadow table support"
... discussed here:
http://www.garlic.com/~lynn/2010e.html#1 LPARs: More or Less?

and renamed sepp (i.e. sepp was the charged-for software that fit
on top of vm370 release 5). there was a lower-priced "bsepp" subset.

So neither SEPP nor BSEPP was free ... they were both charged-for
kernel add-on software for the free vm370 base.

When it came time for vm370 release 7, there was a transition to
charging for all kernel software ... and everything was merged back into
a single charged-for kernel renamed VM/SP (and there were no more
"vm370" releases).

vm370 release 6, being free, has been made available for Hercules (370
virtual machine on the intel platform) ... but not any of the charged-for
software ... my resource manager, SEPP, BSEPP, vm/sp, etc.

filenames are usually descriptive ... but not always. There are also
copies of VMSHARE 291 on YKTVMV, KGNVM8, and KGNVMC, and the HONE
machines (see $VMSHAR QMARK on the disk). Tymshare provides a keyword
search mechanism if you are using the "real" vmshare system on their
machine. Closest thing we have to that is HONE has the files under
CMS/STAIRS and on SJRVM3 the files are available under EQUAL. Both
STAIRS and EQUAL provide keyword search mechanisms of the files.

Brute force is to get a userid with direct access to the files and use
SCANFILE to do a complete search of the whole disk using specific
tokens.
... snip ... top of post, old email index.

QMARK was an internal convention ... something akin to README
files. EQUAL was an internally developed computer conferencing system.

tony@HARMINC.NET (Tony Harminc) writes:
I don't know how similar the 158 and 155 really were (certainly very
different front panel implementations), but it's interesting that the
303x got the microcoded channels, rather than the clunky but rock
solid 28x0 hardwired ones.

i had been doing some timing tests on how short a "dummy" record was
needed to do a track head switch (seek head) between two rotationally
consecutive records on different tracks (involving both channel
processing latency and control unit processing latency). 168, 145, 4341
could successfully do the switch with a shorter block than 158, 303x, and
3081. There were also some number of OEM disk controllers that had lower
latency and required a smaller dummy record ... less rotational delay
to cover the processing latency to process the seek head operation.

The 3830 disk controller was a horizontal microcode engine that was much
faster than the vertical microcode engine (jib-prime) used in the 3880
disk controller. To compensate for the slower processing and also handle
higher data rates ... there was dedicated hardware for data flow
... separate from the control processing done by the jib-prime.
Data-streaming was also introduced (no longer having to do a handshake
for every byte transferred), which helped both with supporting 3mbyte/sec
transfers and with allowing max. channel length to be increased from
200ft to 400ft.
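
a back-of-envelope sketch (propagation and overhead numbers are assumed,
not measurements) of why an interlocked per-byte handshake ties the data
rate to cable length ... while data streaming clocks bytes without
waiting, making the rate independent of 200ft vs 400ft:

NS_PER_FT = 1.5                 # assumed one-way signal propagation, ns/ft

def interlocked_rate_mb_s(cable_ft, ctl_overhead_ns=100.0):
    # each byte waits for a full tag round trip plus controller overhead
    round_trip_ns = 2 * cable_ft * NS_PER_FT + ctl_overhead_ns
    return 1e3 / round_trip_ns  # one byte per round trip -> mbyte/sec

for ft in (200, 400):
    print("%dft interlocked: ~%.1f mbyte/sec" % (ft, interlocked_rate_mb_s(ft)))
# data streaming: rate set by the clock (e.g. 3 mbyte/sec) at either length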

There was a requirement at the time that the 3880 had to be within +/-
5percent of the 3830 ... they ran some batch operating system performance
tests in STL and it didn't quite make it ... so they tweaked the 3880
control to present the operation-complete interrupt ... before the 3880
had actually fully completed all the operations (to appear to be "within"
five percent of the 3830). Then if the 3880 discovered something in error
in its cleanup work ... it would present an asynchronous unit check. I
told them that was a violation of the architecture ... at which time they
dragged me into resolution conference calls with the channel engineers in
POK. Finally they decided that they would save up the unit check error
condition ... and present it as cc=1, csw-stored, unit check on the next
sio ("unsolicited" unit checks were a violation of channel architecture).

so everybody seemed to be happy. then one monday morning the bldg. 15
engineers called me up and asked me what i had done over the weekend to
trash the performance of the (my) vm370 system they were running. I
claimed to have done nothing ... they claimed to have done nothing.
Finally it was determined that they had replaced a 3830 (that they were
using with a string of 16 3330 cms drives) with a 3880. While their batch
os acceptance test didn't have a problem ... i had severely optimized the
pathlength for i/o redrive (of queued operations) after an i/o
completion. My redrive sio managed to hit the 3880 with the next
operation while the 3880 was still busy cleaning up the previous
operation (they had assumed that they could get it done faster than
operating system interrupt processing). Because the controller was still
busy, I would get cc=1, csw-stored, sm+busy (aka controller busy). The
operation then would have to be requeued and the system would go off to
look for something else to do. Then because the controller had signaled
SM+BUSY, it was obligated to do a CUE interrupt. The combination of the
3880's slower processing and all the extra operating system processing
gorp ... was degrading their interactive service by a severe 30percent
(which was what had prompted the monday morning call).
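
a minimal sketch of the sequence (the timings are hypothetical, just to
show the two paths):

CLEANUP_US = 500                 # assumed controller cleanup time after
                                 # presenting "operation complete"

def redrive(redrive_latency_us):
    # a fast host redrive arrives while the controller is still cleaning up
    if redrive_latency_us < CLEANUP_US:
        return ["SIO -> cc=1, csw stored, SM+BUSY",
                "requeue operation, go find other work",
                "CUE interrupt when controller frees up",
                "redrive SIO -> cc=0, operation started"]
    return ["SIO -> cc=0, operation started"]

print("fast redrive:", redrive(50))     # optimized pathlength: extra gorp
print("slow redrive:", redrive(2000))   # batch os: controller already done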

The 3880 batch operating system performance "acceptance" tests had
eventually passed ... however, somewhat related to the earlier 15min
MTBF issue ... much nearer 3880 product ship, engineers had developed a
regression test suite of 57 expected errors. old email notes that for all
the errors in the regression test, mvs required reboot ... and in 2/3rds
of the cases, there was no evidence of what required mvs to be rebooted.
Recent post mentioning the issue
http://www.garlic.com/~lynn/2010d.html#59 LPARs: More or Less?

shmuel+ibm-main@PATRIOT.NET (Shmuel Metz , Seymour J.) writes:
Wasn't VM/SP also an add-on to VM/370 R6? I know for sure that MVS/SP was
an addon to OS/VS2 3.8 + SU64 et al, and I vaguely recall that it wasn't
until around ESA 4 that the free MVS base went away from the packaging.

from Melinda's VM history document referenced above (VM/SP becomes a new
base with all the sepp/bsepp stuff):

At the same time, we started seeing the results of IBM's new
commitment to VM. VM System Product Release 1 came out late in
1980. VM/SP1 combined all the [B]SEPP function into the new base and
added an amazing collection of new function (amounting to more than
100,000 lines of new code): XEDIT, EXEC 2, IUCV, MIH, SUBCOM, MP
support, and more.
... snip ...

also from above:

VM/SP1 was just amazingly buggy. The first year of SP1 was simply
chaotic. The system had clearly been shipped before it was at all well
tested, but the new function was so alluring that customers put it
into production right away. So, CP was crashing all over the place;
CMS was destroying minidisks right and left; the new PUT process was
delaying the shipment of fixes; and tempers were flaring. When the
great toolmaker Jim Bergsten produced a T-shirt that warned
VM/SP is waiting for you, his supply sold out immediately.
... snip ...

re: cia vm/sp experience; YKT has seen similar performance problems
going to SP. They have isolated a major component of it tho. It turns
out that SP increased the default terminal I/O control block size for
all terminals in a redesign for new terminals. This increase in size
resulted in the block no longer being in the "sub-pool" size range. A
storage request for a sub-pool size block can be satisfied in less
than 30 instructions. A non sub-pool size block is allocated using a
best-fit free storage allocation algorithm (i.e. everything on the
chain must be scanned). A large system can easily have 1,000 blocks or
more on the chain. It takes 5-6 instructions per block. Re-defining
sub-pool sized blocks in DMKFRE (modification & re-assembly) resulted
in almost returning overhead to release 6 days. There are other
significant SP degradation hits that have to do only with real AP/MP
operations.
... snip ... top of post, old email index.

the above is related to custom MP changes made to support TPF ... but
caused performance degradation for all other customers ... which was
then somewhat offset/masked by improvement/changes in 3270 i/o (modulo
the storage subpool "bug") ... which didn't help the above customer
since they weren't using 3270s ... but lots of ascii glass teletypes.
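
a toy sketch of the two allocation paths in the above email (illustrative
only, not actual DMKFRE internals): sub-pool sizes come off a per-size
free list in near-constant time, while any other size forces a best-fit
scan of the whole free chain:

class FreeStorage:
    def __init__(self, subpool_sizes, chain):
        self.subpools = {s: [] for s in subpool_sizes}  # size -> free blocks
        self.chain = chain                              # [(addr, size), ...]

    def alloc(self, size):
        pool = self.subpools.get(size)
        if pool:                   # fast path: "<30 instructions"
            return pool.pop()
        # best fit: scan every block on the chain ("5-6 instructions"
        # each, 1,000+ blocks on a large system)
        candidates = [b for b in self.chain if b[1] >= size]
        if not candidates:
            return None
        best = min(candidates, key=lambda b: b[1])
        self.chain.remove(best)
        return best

fs = FreeStorage((64, 128, 256), chain=[(0x1000, 300), (0x2000, 512)])
fs.subpools[128].append(0x3000)      # a returned block queued on its subpool
print(fs.alloc(128))                 # fast path: 0x3000
print(fs.alloc(200))                 # chain scan: best fit (0x1000, 300)

redefining the new terminal i/o block size as a sub-pool size moves those
requests back onto the fast path ... which is what the DMKFRE
modification in the email amounted to.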

The reason I'm sending this note to you is due to your reputation of
never throwing anything away that was once useful (besides the fact
that you wrote a lot of CP code and (bless you) DUMPRX).

I've discussed this with my management and they agreed it would be
okay to fill you in on what the 3090 PC is so I can intelligently ask
for your assistance.

The 3092 (3090 PC) is basically a 4331 running CP SEPP REL 6 PLC29
with quite a few local mods. Since CP is so old it's difficult, if not
impossible to get any support from VM development or the change team.

What I'm looking for is a version of the CP FREE/FRET trap that we
could apply or rework so it would apply to our 3090 PC. I was hoping
you might have the code or know where I could get it from (source
hopefully).

The following is an extract from some notes sent to me from our local
CP development team trying to debug the problem. Any help you can
provide would be greatly appreciated.
... snip ... top of post, old email index.

jim.marshall@OPM.GOV (Jim Marshall) writes:
I have been following some discussions on Adventure, StarTrek, and other
games around back in the 20th Century. If you look on the CBT tape you will
find a number of Computer games from back in the 1970s; WUMPUS,
RoadRace, Eliza, Lunar Lander, etc. Thought it was about time to clue folks in
on some events. Back in the 1970s the Air Force assigned me to the Pentagon
to work on an IBM 360-75J & OS/MVT/HASPIII. Along the way we were
blessed with the first IBM 303X shipped; namely a 3032 serial number 6. Along
with it came all the DASD and tape plus an IBM 3850 MSS (35GB) with a bunch
of Virtual IBM 3330 (100MB) drives.

he had references to doing a stint in 1970 running "spook base". One of
his biographies makes reference to his stint at spook base ... with a
comment that it was a $2.5B windfall for IBM.

recent post referenced that the disk division had a datacenter in bldg. 26
running numerous MVS machines ... however, it ran out of space for all
the computing demand for all the tools. vm/4341s were starting to go
into every nook&cranny ... w/o needing all the datacenter infrastructure
required by the big iron. they started looking at doing something
similar with mvs on 4341 ... however, one of the issues when looking at
capacity planning was the low "capture ratio" that really messed up
their numbers (MVS and VTAM pathlength by themselves would nearly consume
the 4341).
http://www.garlic.com/~lynn/2010d.html#66 LPARs: More or Less?

Part of the problem was that the applications were using some MVS system
services not supported by CMS (CMS having about 64kbytes of o/s
simulation code). However, the Los Gatos lab found that with about another
12kbytes of o/s simulation glue code ... these large applications moved
over (the application being able to use nearly all of the processor).

funny thing about the wording in the above email was that neither the
person writing the email nor his immediate management seemed to have
realized that I had helped the manager that started the vm service
processor for 3090 (i.e. turnover and/or transient nature of the
positions).

the issue with the 3081 service processor was that a whole bunch of
stuff had to be created from scratch (roll-your-own operation). recent
post mentioning 3081 service processor
http://www.garlic.com/~lynn/2010d.html#43 What was old is new again (water chilled)

the trade-off was having a more sophisticated environment for service
processor operation but having to invent/develop everything from scratch
... vis-a-vis having an off-the-shelf more sophisticated infrastructure
that might possibly have some things that weren't absolutely
required. 3090 service processor was getting to the point where it
wasn't practical to be inventing/developing everything from scratch.

one of the funnies in the 3081 service processor was that its disk drive
was a 3310 FBA (versus the 3370 FBA used by vm for the 3092) ... and the
3081 service processor needed to do paging operations. the 3081 didn't
have enough storage for all the microcode ... so there were some 3081
operations that involved the service processor doing microcode paging
from the 3310 FBA device.

the 3090 engineers would point out that part of the performance
difference between the 3081 and 3090 ... was that (on the 3090) there
weren't any critical performance paths that required paging microcode.

Why does Intel favor thin rectangular CPUs?

MitchAlsup <MitchAlsup@aol.com> writes:
I have been thinking along these lines...

Consider a chip containing CPUs sitting in a package with a small-
medium number of DRAM chips. The CPU and DRAM chips orchestrated with
an interface that exploits the on die wire density that cannot escape
the package boundary.

A: make this DRAM the only parts of the coherent memory
B: use more conventional FBDIMM channels to an extended core storage
C: perform all <disk, network, high speed> I/O to the ECS
D: page ECS to the on die DRAM as a single page sized burst at FBDIMM
speeds
E: an efficient on-CPU-chip TLB shootdown mechanism <or coherent TLB>

A page copy to an FBDIMM resident page would take about 150-200 ns;
and this is about the access time of a single line if the whole ECS
was made coherent!

F: a larger ECS can be built <if desired> by implementing a FBDIMM
multiplexer

this was somewhat the 3090 in the 80s (but a room full of boxes)
... modulo not quite doing i/o into/out-of expanded store. the issue was
that the physical packaging couldn't get all the necessary real storage
within the processor latency requirements.

there was a wide-bus, (relatively) very fast synchronous instruction
that moved 4k bytes between processor storage and expanded store.

at the time, i complained about not being able to do i/o directly
into/out-of expanded store.

there was something half-way in-between ... when attempting to support
HIPPI, the standard 3090 i/o interface couldn't support the bandwidth
... so they hacked into the side of the expanded store bus for HIPPI
I/O.

re: vm/sp performance; almost all heavily loaded, large systems that
have gone to vm/sp that i'm aware of, have experienced system
degradation compared to whatever they were running previously. In most
cases they have been able to bring performance up to an acceptable
level thru various code twiddling. As far as I know nobody has found
whatever problems are responsible for the system degradation; all the
performance improvements would have been equally applicable to
whatever level of the system they were running previously (i.e. 1)
whatever bug(s) are there, are still there and 2) they could have gotten
a much better performing system by making the modifications to a pre-SP
level system).
... snip ... top of post, old email index.

time to switch heads includes the complete propagation delay between the
end of data transfer on the previous data block thru all the channel ccw
fetches (transferring appropriate data down to the control unit, etc.)
to get to the next data transfer ccw before the r/w heads get to the
next data block. page format (for cp) of 4k blocks on a 3330 track only
leaves room for about 100 byte dummy filler records. Now . . .

First thing you do in cp, if you know that you are chaining CCW
"packages", is to eliminate the SET SECTOR from the TIC'ed-to ccw
packages (by overlaying the set sector with a "copy" of the seek and
tic'ing to the 2nd ccw in the package). Since the rotational delay is
minimal, set sector only increases the delay time in the channel for
fetching and transferring data.

Since the specs. on the 3330 require a dummy filler gap of greater
than what is available, IBM code didn't bother to support switching
heads on the 3330. Boston Univ. (running on a 145) did some
experimenting and found that head switching did in fact work for their
machine. About 3 years ago we picked up that code (which includes TEST
mode running where "hits&misses" are monitored). We found that on our
158 that it was missing 80% of the time. Since our floor system also
runs on several GPD machines, we checked its operation on 4341, 3031s,
and 3033s. On 4341 it worked fine, on 3031 and 3033 it had the same
operating characteristics as 158. I then talked to a customer with a
168 which did some further experimenting for me. He found that it
would "hit" 100% of the time using IBM & CDC drives with the expanded
filler. He also found that on Memorex drives he could "crank" the
filler record down to 50 bytes and still get 100% hits.

The 4341 essentially has the same channel hardware as the 145 (in fact
a standard 4341 with minimal microcode changes is used for 3meg.
channel testing). The 158 channels are the "slowest" channels of the
370 line. The similarity between the 158 and the 303x is because of
the channel directors. The same channel director hardware is shared
across the whole 303x line. A channel director is essentially a
stripped down 158 (i.e. 158 integrated channel microcode).

Anyway, most of the delay is not in the actual head switching but in
all the delays back in the channel to process all the CCWs between the
end of the previous data transfer and the start of the next one. I
haven't bothered to actually figure out what would be the required
filler block using channel directors. Might try at least double and
start cutting it back down from there if you get constant "hits".
... snip ... top of post, old email index.

note the comment in the above about the 4341 being used for 3mbyte/sec
channel testing. the other alternative for retrofitting (3mbyte/sec)
3380s to earlier machines was the "Calypso" speed-matching addition to
the 3880 controller ... that was basically the start of ECKD ... and a
long litany of various kinds of emulation to avoid having to move past CKD.
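
a rough sketch of the trade-off in the above email (the 3330 geometry
numbers are real; the per-machine CCW-path latencies are just assumptions
for illustration):

RPM = 3600                       # 3330 rotation speed
TRACK_BYTES = 13030              # approx. 3330 full-track capacity
REV_US = 60e6 / RPM              # ~16,667 microseconds per revolution
US_PER_BYTE = REV_US / TRACK_BYTES   # ~1.3us of rotation per byte

def hits(filler_bytes, ccw_path_us):
    # head switch "hits" if channel/controller CCW processing finishes
    # while the dummy filler record is still passing under the heads
    return ccw_path_us <= filler_bytes * US_PER_BYTE

for filler in (50, 100, 200):
    for path_us in (60, 150, 300):   # assumed fast/medium/slow CCW paths
        print(filler, path_us, hits(filler, path_us))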

from above:
According to the RFI, the service's existing IT infrastructure is
outdated and at risk of failing. Forty-two mission-oriented applications
run on a 1980s IBM mainframe with a 68 percent performance reliability
rating, it said. In addition, data systems and IT security don't
meet requirements, the service said.
... snip ...

Lynn, do you remember some notes or calls about putting DUMPRX into an
IBM product? Well .....

From the last time I asked you for help you know I work in the
3090/3092 development/support group. We use DUMPRX exclusively for
looking at testfloor and field problems (VM and CP dumps). What I
pushed for back aways and what I am pushing for now is to include
DUMPRX as part of our released code for the 3092 Processor Controller.

I think the only things I need are your approval and the source for
RXDMPS.

I'm not sure if I want to go with or without XEDIT support since we do
not have the new XEDIT.

In any case, we (3090/3092 development) would assume full
responsibility for DUMPRX as we release it. Any changes/enhancements
would be communicated back to you.

If you have any questions or concerns please give me a call. I'll be
on vacation from 12/24 through 01/04.
... snip ... top of post, old email index.

Agile Workforce

early in my career, i was associated with a rapidly growing development
effort ... so they had a five year plan that involved adding hundreds of
people and a huge number of individual items spread over five years
(before spreadsheets and other automated aids that managed such
things).

i had done dynamic adaptive resource management (for computing
resources, as an undergraduate) so I got stuck with managing the
plan. there were also corporate politics involved ... and there would be
weekly calls from hdqtrs with people asking ridiculously trivial what-if
questions about the plan ... things that represented small fractions
of a percent in the overall scope of things .... but, given the
resources of the group at the time, answering them could have completely
buried the whole group. So i memorized the whole thing
... and got practiced to the point that i could answer the questions
as fast as hdqtrs could pose their (ridiculously insignificant,
trivial) questions.

much later there was a study of successful startups in silicon valley
... that claimed that the single most common characteristic of a
successful startup was that they had completely changed their business
plan at least once within the first two years (i.e. agility was much
more important than accuracy).

I also sponsored John Boyd's briefings at IBM ... including his
Organic Design for Command and Control.

More recently he was credited with the battle plan for the previous
effort in the middle east ... and people involved in the current
middle east activities have commented that one of the biggest problems
was that John had died in the interim. A more recent reference to John:
"There are two career paths in front of you, and you have to choose
which path you will follow. One path leads to promotions, titles, and
positions of distinction.... The other path leads to doing things that
are truly significant for the Air Force, but the rewards will quite
often be a kick in the stomach because you may have to cross swords
with the party line on occasion. You can't go down both paths, you
have to choose. Do you want to be a man of distinction or do you want
to do things that really influence the shape of the Air Force? To be
or to do, that is the question." Colonel John R. Boyd, USAF 1927-1997

From the dedication of Boyd Hall, United States Air Force Weapons
School, Nellis Air Force Base, Nevada. 17 September 1999

In addition to his OODA-loop theories beginning to show up in business
and management programs ... there are starting to be references to his
"to be or to do" line. His OODA-loop theories have also been used to
rewrite Marine training oriented towards a much more adaptable and
agile work force.

His observation about the effect that WW2's rigid, top-down,
command&control structure training has had on american business
culture has also been used to explain the enormous bloat in executive
compensation. There have been reports that the ratio of executive
compensation to worker compensation has exploded to 400:1 after having
been 20:1 for a long time (and 10:1 in much of the rest of the
world). The WW2 scenario has only a very few at the very top
understanding what they are doing, with the great masses of workers
unskilled and requiring the rigid, top-down command & control structure
in order to make accomplishments. The very few with skills directing
huge masses of unskilled is then used to justify the 400:1 compensation
ratios.

Byte Tokens in BASIC

Eric Chomko <pne.chomko@comcast.net> writes:
If you take Gates' coding ability as a standard of his wealth, then
more than enough folks around here should be worth 100s of millions of
dollars at the very least!

i think that it is more like wealth inversely related to the time spent
coding ... more akin to Boyd's to be or to do
"There are two career paths in front of you, and you have to choose
which path you will follow. One path leads to promotions, titles, and
positions of distinction.... The other path leads to doing things that
are truly significant for the Air Force, but the rewards will quite
often be a kick in the stomach because you may have to cross swords
with the party line on occasion. You can't go down both paths, you
have to choose. Do you want to be a man of distinction or do you want
to do things that really influence the shape of the Air Force? To be
or to do, that is the question." Colonel John R. Boyd, USAF 1927-1997

From the dedication of Boyd Hall, United States Air Force Weapons
School, Nellis Air Force Base, Nevada. 17 September 1999

Credit card tokenization technology can help better protect credit
card data, but a lack of industry standards and complexity issues pose
problems for merchants, according to a panel at the 2010 RSA
Conference

... snip ...

There were association mandates for this in the 90s ... the issue was
that consumers would reference a dispute by account number and date/time
.... not by token ... and there wasn't sufficient dataprocessing
infrastructure to provide the mapping from account number and
transaction date/time ... to the transaction-specific token (or
transaction-id).

The underlying issue is that whatever value is being used is being forced
into a dual-purpose role ... both for the business process of managing
the transaction .... as well as (effectively) something you know
authentication. The token scenario ... attempts to introduce a
one-time-use, transaction-specific dual-purpose business
handle/authentication value .... rather than fixing the paradigm of
having the same value act as both the business transaction identifier
and the authentication value.

The transaction-id scenario doesn't fix the underlying problem
... just attempts to somewhat reduce the scope of the vulnerability.
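
a minimal sketch of the missing mapping (names and structures are
hypothetical): the consumer disputes by account number and date/time, so
the processor needs an index from that pair back to the
transaction-specific token:

transactions = {}        # token -> transaction record
dispute_index = {}       # (account, date/time) -> token ... the piece
                         # the 90s infrastructure didn't have

def record(token, account, timestamp, amount):
    transactions[token] = (account, timestamp, amount)
    dispute_index[(account, timestamp)] = token

def lookup_dispute(account, timestamp):
    token = dispute_index.get((account, timestamp))
    return transactions.get(token)

record("tok-1g7x", "4111...1111", "2010-03-05T10:31", 42.50)
print(lookup_dispute("4111...1111", "2010-03-05T10:31"))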

Multi-factor authentication will solve the problems of online
banking. In a blog posting on the threatpost website Roel
Schouwenberg, a senior anti-virus researcher in Kaspersky Lab's global
research and analysis team, claimed that

... snip ...

Eliminating the dual-use nature of the account number (being used for
transaction related business processes as well as something you know
authentication) ... goes a long way to fixing many of the
vulnerabilities in the existing infrastructure.

Peter Flass <Peter_Flass@Yahoo.com> writes:
Yes, but the system/z software isn't available - at any price. You
can run it, but IBM won't sell it to you to run on a "non-approved"
system. They used to partner with the authors of the Flex/es emulator
(Rational Software?), but they yanked the agreement, leaving customers
hanging. There was some talk about them (IBM) coming out with their
own emulator, but I haven't heard more about it.

*Legally* all you can run on Hercules emulating system/z is 64-bit
z/Linux (native, z/VM is also unavailable to host it).

The first time I was scheduling Boyd's briefing at IBM, I tried to do
it thru employee education ... who initially agreed. However, after I
provided more information about Boyd's briefings, they changed their
mind. Effectively what they said was that the company spends a lot of
money educating/training managers ... and presenting some of Boyd's
ideas to ordinary employees might be counterproductive to all that
management training. Employee education suggested that I might
consider limiting Boyd's audience to just people in corporate
competitive analysis departments.

Boyd made a couple of references to doing a year's stint running "spook
base". One of his biographies has a reference that "spook base" was a
$2.5B windfall for IBM.

One of Boyd's WW2 examples was shermans vis-a-vis tigers .... tigers
having something like a 10:1 kill ratio ... but the US could produce
massive numbers of shermans and win by overwhelming numbers and
logistics (the downside being some loss of tank crew morale
... because they were being used as cannon fodder).

Boyd's counter example was Guderian's "verbal orders only" during the
blitzkrieg ... Guderian wanted the local commanders to feel free to
make decisions on the spot w/o having to worry about a thick pile of
paper CYA.

About that time, we were having a corporate audit. 6670s&sherpas were
being deployed around the building in nearly every dept
(dept. secretary or office supply) for computer output. The machines
(basically copier 3s with a computer interface) had the alternate paper
drawer loaded with colored paper. The computer printer driver had been
modified to print an output "separator" page from the alternate paper
drawer. The separator page was mostly blank ... so the driver was
enhanced to randomly select quotations from a file (to also print on
the separator page). An after-hours sweep by the auditors for unsecured
confidential information found an unclassified output on one of the
6670s, where the separator page had the definition of an auditor
(people that go around the battlefield after a war, stabbing the
wounded). They tried to lodge a complaint that we had placed it there
on purpose.

It wasn't true, but we were having something of a disagreement with
the auditors regarding demo programs. We had a large collection of
demo programs that the auditors were classifying as games and wanted
removed from all corporate computers. They were pushing for
modifying the logon screen to say the system could be used for business
purposes only. We were pushing for the logon screen to say it could
only be used for management approved purposes. Demo programs were a
valuable tool in educating people about characteristics of online
computing.

Boyd closed Organic Design for Command and Control ... after
highlighting the pervasive command & control mentality (including in
the business world) ... with the observation that it should be replaced
with "leadership & appreciation".

the above discusses potentially disabling for i/o interrupts. as it
mentions .... I had done something similar a decade earlier ... it would
dynamically change based on the I/O interrupt rate crossing some
threshold.
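
something like the following (the thresholds are made up; the point is
rate-based switching with hysteresis so the mode doesn't flap):

HIGH_RATE = 500      # interrupts/sec: go disabled (poll/batch) above this
LOW_RATE = 200       # interrupts/sec: re-enable below this

class InterruptPolicy:
    def __init__(self):
        self.enabled = True      # start out enabled for i/o interrupts

    def update(self, rate_per_sec):
        if self.enabled and rate_per_sec > HIGH_RATE:
            self.enabled = False     # drain device status by polling
        elif not self.enabled and rate_per_sec < LOW_RATE:
            self.enabled = True      # back to interrupt-driven
        return self.enabled

p = InterruptPolicy()
for rate in (100, 600, 550, 150, 90):
    print(rate, "enabled" if p.update(rate) else "disabled/polling")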

we had been called in to consult with a small client/server startup
that wanted to do payment transactions on their server ... they had
also invented this technology called "SSL" they wanted to use ... the
result is now frequently referred to as electronic commerce.

somewhat as a result, we were invited to participate in the x9a10
financial standard working group ... which had been given the
requirement to preserve the integrity of the financial infrastructure
for ALL retail payments (aka a standard for POS, debit, credit, ach,
stored-value, internet, attended, unattended, transit turnstile,
online banking ... aka ALL ... needed to be lightweight enuf to
perform within the time & power constraints of a transit turnstile
... but with enough integrity that it could be used for very high-value
transactions). part of the x9.59 standard slightly tweaked the paradigm
to also eliminate the breach vulnerability (part of the x9a10 financial
standard working group effort was detailed end-to-end threat and
vulnerability studies of different environments). some references to x9.59
http://www.garlic.com/~lynn/x959.html#x959

at the time, the associations had a number of different groups doing a
number of different specifications for various environments ... aka
one specification for POS ... and different specification(s) for the
internet.

some trivia ... that early electronic commerce involved an internet SSL
connection between webservers and the payment gateway ... and then the
payment gateway simulated a payment concentrator with a dedicated leased
line into the acquiring processor. the first "protocol" implemented had
been the one used by the acquirer and shift4 terminals.

Impact of solid-state drives

Chris Friesen <cbf123@mail.usask.ca> writes:
Of course it's better not to swap. However, given a specific machine
and a specific workload, there may be no possible way to fit both the
code and the data set into RAM at the same time.

If swapping out a page of code that gets executed extremely rarely
allows the data set to fit in RAM and the app runs 10x faster, I'm all
for paging executable code.

recent reference to somebody's decision to bias the page replacement
algorithm to non-changed pages ... (i.e. less effort, since the replaced
page didn't have to be written out first, a valid copy was still on the
paging device) ... it was that way for nearly a decade (most of the 70s)
before they realized that they were replacing high-use, shared
executable code before much lower-use, private data pages
http://www.garlic.com/~lynn/2010d.html#78 LPARs: More or Less?

there was some facetious reference about getting paid for doing it the
wrong way ... so that later they could get an award for fixing the problem.
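
a toy illustration of the bias (the page list is invented): preferring
non-changed pages means the hot shared executable page, which is clean,
goes out ahead of colder but dirty private data pages:

pages = [
    {"name": "shared-code",   "dirty": False, "last_use": 99},  # hot, clean
    {"name": "private-dataA", "dirty": True,  "last_use": 10},
    {"name": "private-dataB", "dirty": True,  "last_use": 5},
]

def pick_victim_biased(pages):
    clean = [p for p in pages if not p["dirty"]]
    pool = clean or pages          # prefer clean: no page-out i/o needed
    return min(pool, key=lambda p: p["last_use"])

def pick_victim_lru(pages):
    return min(pages, key=lambda p: p["last_use"])

print("biased evicts:", pick_victim_biased(pages)["name"])   # shared-code
print("plain lru evicts:", pick_victim_lru(pages)["name"])   # private-dataB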

gahenke@GMAIL.COM (George Henke) writes:
The current trend towards CMMI and the Six Sigma standard of quality, 6
standard deviations (3.4 defects in a million) instead of the typical 3
standards deviations (1 defect in 370) points to the demand for quality,
excellence, and perfection in everything.

In the early 90s, one of the big-3 had a C4 task force that was
looking at improving their competitive position ... and invited in some
number of technology vendors to participate. They went thru the majority
of the issues with respect to their current state and foreign
competitors. One of the big issues was that a major foreign competitor
had reduced the elapsed cycle to produce a (totally new) product (from
idea to rolling off the line) from 7-8 years to 2-3 years (and was
looking at dropping below 2 years). A big part of C4 was leveraging
technology as part of drastically reducing the elapsed product cycle time.

I chided some of the mainframe brethren attending the meetings about
being there to offer advice on reducing the product cycle from 7-8yrs to
2-3yrs (when they themselves were still on a long product cycle).

Within the domestic auto industry ... although they could very clearly
articulate all the important issues ... the status quo was so entrenched
that they found it difficult to change.

i.e. aggregate computing cost (for everybody) is actually less with a
single part number ... than if there were a large number of different
parts.

in the early 80s, a major point in the analysis of vm/4341s going into
every nook & cranny versus "big iron" in the datacenter was the
enormously greater "big iron" expense involved in adding capacity. this
can somewhat also be seen in the return to the old timesharing days with
cloud computing.

having extra capacity already available at the customer site ... is
analogous to having an on-site spare parts depot &/or an on-site CE.

we would go by somers and discuss with some of the occupants that the
company (especially the mainframe business) was facing similar issues;
they would all essentially agree and (also) be able to clearly articulate
the issues ... and then we would go back the next month or a couple
months later ... and nothing had changed.

there seemed to be a strong sense that they were (also) trying to
preserve the status quo until their retirement, leaving corrective
action to somebody else.

then the company went into the red ... and some of the status quo and
vested interests were forced to change (compared to the auto industry
that managed to preserve the status quo & vested interests across years
and years in the red).

rich@VELOCITYSOFTWARE.COM (Rich Smrcina) writes:
Most of the new support now centers around capacity (very large
virtual machines) and virtual networking (virtual switch). There is a
statement of direction for clustering (not sysplex) and guest
migration (moving live machines between VM systems). All of this with
the intent of supporting Linux.

note that some of the commercial (virtual machine) timesharing service
bureaus had done moving live machines between VM systems in the early
70s; it was a combination of 7x24 operation and providing services to
customers around the world .... addressing the problem that there was no
down period for service ... where downtime/outages could be tolerated
for things like preventive maintenance. They would migrate virtual
machines as part of dynamically taking complexes offline (out of the
cluster) for service. misc. past posts mentioning the virtual machine
commercial timesharing services dating back to the 60s:

The largest such clustering ("single system image") operation in
the late 70s (not limited to vm systems, but any mainframe complex,
anywhere) was the US VM-based internal (worldwide) sales&marketing
support HONE system. The US HONE centers had been consolidated in a
single bldg. (they no longer occupy the bldg, but it is located next door
to the new facebook bldg). This US datacenter was the largest (although
other HONE clones had started to spring up several places around the
world) with load-balancing and fall-over recovery. Then because of
earthquake concerns, in the early 80s, the cal. center was first
replicated in Dallas and then a 3rd in Boulder (with load-balancing and
fall-over across the redundant centers). a few recent posts in "Greater
IBM" discussing HONE:
http://www.garlic.com/~lynn/2010d.html#27 HONE & VMSHARE
http://www.garlic.com/~lynn/2010e.html#24 Unbundling & HONE
http://www.garlic.com/~lynn/2010e.html#25 HONE Compute Intensive
http://www.garlic.com/~lynn/2010e.html#29 HONE & VMSHARE

Customers of the product saw a big impact when POK got the development
group shut down and all the people moved to POK to support MVS/XA
development. initially the product was also killed; Endicott managed to
save the product mission ... but had to reconstitute a group from
scratch. This possibly contributed to the VM/SP quality issues mentioned
in Melinda's history ... recent references:
http://www.garlic.com/~lynn/2010e.html#31 What was old is new again (water chilled)
http://www.garlic.com/~lynn/2010e.html#36 What was old is new again (water chilled)

The 4331/4341 saw a big explosion in distributed machines connected via
network ... so the (reconstituted) group's focus wasn't on the highend
and/or clustering in a single location.

There was a research clustering project, eight 4341s with a 3088 ... that
was eventually offered as a product ... but that ran into a couple of problems

1) there was already pressure from the high-end (mvs, 3033, 3081s, etc)
where customers found that multiple vm/4341s were much more cost
effective than the big iron ... so anything that enhanced this was met
with opposition.

2) the internal product had things like cluster-wide operations taking
very small subsecond elapsed time. however, for product ship they were
forced to move to SNA ... and all of a sudden simple cluster-wide
coordination operations were taking nearly a minute elapsed time. This
is similar to the on-going SNA battles that my wife faced when she had
been con'ed into going to POK to be in charge of (high-end) mainframe
loosely-coupled architecture. The battles with SNA (ephemeral temporary
truces where she could use anything she wanted within datacenter walls
but SNA had to be used by anything crossing the walls of the datacenters)
and very little uptake at the time (except for IMS hot-standby, until
sysplex) meant that she didn't stay long ... misc. past posts mentioning
her peer-coupled shared data architecture
http://www.garlic.com/~lynn/submain.html#shareddata

LPARs: More or Less?

"Dave" <g8mqw@yahoo.com> writes:
Well "VM" or "CP" is very different. As the XA/ESA/z machine can't be
virtualized easily in software, I assume because of the need to make
AMODE switches efficient, CP now relies on the SIE instruction to
provide virtual machines. I gather this uses the same "hardware" that
LPARs use. I guess you might consider this a "superset" of the VM
Assists that were available on S/370...

XA wasn't as easily virtualized as 360 & 370 were ... and therefore the
SIE instruction. SIE predates PR/SM and LPARs ... in that sense PR/SM
and LPARs were leveraging the pre-existing infrastructure for microcode
assists and SIE (and for some time, some of the assist microcode was
mutually exclusive, either for VM use or LPAR use ... but not both).

In the 360/370 scenario ... LPSW and interrupts loaded a new PSW ... which
simultaneously switched address space and privilege/problem mode in a
single operation (in MVS, a copy of the kernel appears in every address
space ... whereas in cp/vm, the kernel and the guest address spaces are
totally distinct).

SIE was able to change address spaces and privilege/problem mode in a
single operation, as well as set a flag for privileged instructions ...
basically an "assist mode" indicating that a privileged instruction is
executed according to virtual machine rules (as opposed to real machine
rules; basically each assisted privileged instruction has modified
microcode). To simultaneously use the assists for LPARs and virtual
machines ... effectively needed each privileged instruction's microcode
to be further modified to recognize 1) real machine, no LPAR, no virtual
machine, 2) LPAR, no virtual machine, 3) virtual machine, no LPAR, 4)
both LPAR and virtual machine. From a microcode standpoint, LPAR+VM is
similar to virtualizing SIE (i.e. running a guest VM system under a VM
virtual machine).
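
schematically, the case analysis looks something like this (just a
sketch, not actual microcode):

from enum import Enum

class Mode(Enum):
    REAL = 0        # real machine, no LPAR, no virtual machine
    LPAR = 1        # LPAR, no virtual machine
    VM = 2          # virtual machine, no LPAR
    LPAR_VM = 3     # virtual machine running inside an LPAR

def privileged_op(op, mode):
    if mode is Mode.REAL:
        return op + ": real machine rules"
    if mode is Mode.LPAR:
        return op + ": LPAR relocation/limits, then real machine rules"
    if mode is Mode.VM:
        return op + ": virtual machine rules (guest state)"
    # LPAR+VM: like virtualizing SIE -- guest rules nested under LPAR rules
    return op + ": virtual machine rules nested under LPAR rules"

for m in Mode:
    print(privileged_op("LPSW", m))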

the other is the emulated implementations ... like Hercules
... implemented on Intel platform; this could be considered analogous
to the entry & mid-range mainframe implementations that had been
done in vertical microcode.

There was a separate performance issue going from virtual MVT guests to
virtual SVS/MVS guests ... since VM started out simulating the TLB
(the hardware look-aside buffer implementing virtual memory) with shadow
page tables. Managing the entries in the shadow page tables in
software was enormously slower than the hardware overhead involved
in managing TLB entries. There could also be pathological page
replacement algorithm behavior ... with MVS approximating LRU page
replacement and VM also approximating LRU page replacement ... VM
might be selecting an MVS guest page for removal from real storage
... moments before the MVS guest decides that it is the ideal next page
to use (VM deciding that since it hasn't been used, it can be removed
from real storage, and MVS deciding that since it hasn't been used, it
can be reassigned for some other use).
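
the pathology in miniature (page set invented): both levels approximate
LRU over the same pages and so converge on the same victim:

guest_pages = {"A": 50, "B": 7, "C": 90}     # page -> last-use timestamp

def lru_victim(pages):
    return min(pages, key=pages.get)

vm_evicts = lru_victim(guest_pages)     # VM: "B" unused longest, page it out
guest_reuses = lru_victim(guest_pages)  # MVS guest: "B" unused longest,
                                        # reassign it next ... touching it
assert vm_evicts == guest_reuses        # same heuristic, same page
print("vm pages out", vm_evicts, "just before guest touches", guest_reuses)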

the co-op students graduate that spring and one of them goes to work for
a VM-based online commercial timesharing service bureau (started by the
head of MIT lincoln labs & some of the other cp67 people in the area).

a lot of the code he had worked on doesn't ship for awhile ... so he
recreates the implementations: shared-segment extensions, page
migration, administrative address spaces (used to package kernel
control blocks and migrate them to disk paging areas). Adding
loosely-coupled shared device support allows page&spool to be migrated
off of devices that need to be taken offline for service; also a
complete virtual machine description can be moved out of one processor
and back into a different processor (processor complex needing to go
offline for service). That was done along with front-end
load-balancing and single-system-image clustering support.

some of these vm-based timesharing companies moved up the value stream by
offering online financial information to the financial community (stuff
like 100? years of stock closing prices ... service delivery being done
via the paged-mapped, shared segment facility). much later, they show up
offering the services on the web ... and this company even shows up in
the financial crisis.

congressional hearings highlighted that the rating agencies played a
major role in the current crisis by selling "triple-A ratings" for toxic
CDOs. The unregulated (non-depository) loan originators had their
source of funds enormously increased when they were able to pay for
"triple-A ratings" w/o regard to the actual value. The hearings
pointed out that the seeds for this conflict of interest were sown in
the early 70s, when the rating agencies switched from buyers paying
for the ratings to the sellers paying for the ratings. Being able
to pay for triple-A ratings not only enormously increased the source
of funds to the unregulated (non-depository) loan originators but also
eliminated any concern they had for loan quality or borrower
qualification (since they could always get triple-A ratings and
immediately sell off at a premium). Speculators sucked them up since
no-down, no-documentation, one-percent ARMs with interest-only
payments were significantly less than real-estate inflation (with
speculation frenzy further fueling inflation; planning on flipping
before the ARM adjusted; 20 percent inflation in some markets, carried
with 1% interest-only payments, is potentially nearly 2000% ROI
... aka the annual gain running twenty times the annual carrying cost).

This particular company (which started out as a vm-based commercial
timesharing service bureau) bought the "pricing services division" from
one of the rating agencies about the time the congressional testimony
said that the rating agencies switched who paid for the ratings (creating
the opening for the conflict of interest).

In early Jan2009, the same company shows up in the news as helping the
fed. gov. price the troubled/toxic assets for purchase with the TARP
funds (possibly using the pricing services division purchased nearly
40yrs earlier?). This was before the gov. realized that the amount of
appropriated TARP funds was a small drop in the bucket compared to just
the total troubled assets being carried offbook by the four largest
too-big-to-fail institutions (and the gov. was forced to invent other
ways of using TARP funds and saving the economy).

z9 / z10 instruction speed(s)

lists@AKPHS.COM (Phil Smith III) writes:
If you look at carefully written PC software like, say, Steve Gibson's
stuff (www.grc.com -- not a plug, just an example that comes to mind),
you'll see incredibly rich and powerful stuff that fits in the palm of
your PC's hand, so to speak. http://www.grc.com/freepopular.htm has
dozens of apps, most in the 25K (yes, K!) range. So it isn't
impossible to write tight software for PCs, just discouraged by the
apparent lack of need and ubiquity of IDEs that produce bloat.

recent thread about redoing part of a res system ... sized that something
like ten rs/6000 580s could have handled the full world-wide activity
.... way outperforming a large datacenter of TPF systems (the same load
was projected to take a couple hundred es9000 processors) .... and a
current treo (xscale processor) theoretically has approx. the compute
power of those ten 580s.
http://www.garlic.com/~lynn/2010b.html#80 Happy DEC-10 Day
http://www.garlic.com/~lynn/2010c.html#19 Processes' memory

"Charlie Gibbs" <cgibbs@kltpzyxm.invalid> writes:
Oh, we had just about every other language imaginable, and probably
a few that weren't. WATFOR, FORTRAN G, Assembler G, PL/I, Snobol4,
LISP, UMIST, APL, Algol 60, Algol 68, Algol W, pl/360 (the misbegotten
bastard child of Algol and assembly language)... In fact, there was
one course whose sole purpose was to feed you a new language every
two weeks. (I dropped out before I had to take that one.)

from above:
One could say PRINT ACROSS MONTH SUM SALES BY DIVISION and receive a
report that would have taken many hundreds of lines of Cobol to
produce. The product grew in capability and in revenue, both to NCSS and
to Mathematica, who enjoyed increasing royalty payments from the sizable
customer base.
... snip ...

from above:
All three products flourished during the 1970s and early 1980s, but
Mathematica's time ran out in the mid-80s, and NCSS also failed, a
victim of the personal computing revolution which obviated commercial
timesharing (although it has since been revived in the form of ASPs and
shared web servers).
... snip ...

from above:
Nomad was claimed to be the first commercial product to incorporate
relational database concepts. This seems to be borne out by the launch
dates of the well-known early RDBMS vendors, which first emerged in the
late 70s and early 80s -- such as Oracle (1977), Informix (1980), and
Unify (1980). The seminal non-commercial research project into RDBMS
concepts was IBM's System R, first installed at IBM locations in
1977. System R included and tested the original SQL implementation.
... snip ...

from above:
Jim Gray: In about 1972 Stonebraker got a grant to do a geo-query
database system. It was going to be used for studies of urban
planning. The project did do some geographic database stuff, but fairly
quickly it gravitated to building a relational database system. The
result was the INGRES system[20]. INGRES started in about 1972 and a
whole series of things spun off from that: Ingres[21], Britton-Lee, and
Sybase.
... snip ...

z9 / z10 instruction speed(s)

jmfbahciv <jmfbahciv@aol> writes:
I think the hardest thing to do is get a project started. It seems
to take one person whose enthusiasm cannot be quenched to do
all the preliminary work required to setup a project. Bosses
don't tend to do this. DECUS had an accounting session at every
Fall and Spring meeting asking DEC to do "something" about the
system usage gathering mechanism. Every customer had their
own _unique_ solution which didn't mesh with anybody else's.
No monitor developer was interested in doing the mundane
thing called accounting. Most, as in 99%, considered the
whole thing a PITA.

the preservation of the status quo went way beyond difficulty in getting
something new started ... this was (relatively) large numbers of people
feeling that they had a significant vested interest in the way things
were; aka there was significant additional compensation for having 30yrs
experience in that status quo ... which would not be so highly prized in
a totally different environment ... trying to stall any changes until
they retired, some analogy to a rearguard action during retreat.

the auto industry seemed to survive a couple decades with significant
amounts of red ink w/o having to make significant changes ... all of the
major stakeholders involved in the industry trying to preserve the
status quo ... more & more individuals seeing retirement just around the
corner and trying to preserve seniority, privileges & the status quo
until after they were gone.

my dynamic adaptive resource manager was dependent on a lot of
instrumentation that was also used for accounting. part of accounting is
the creation & keeping of all the information ... somewhat separate from
accounting as in billing for resource usage. another application of
the usage data was workload profiling, which evolved into things
like capacity planning.

IBM Plans to Discontinue REDBOOK Series

lists@AKPHS.COM (Phil Smith III) writes:
So it's not quite as clear from the beancounter side of the street,
since those last are impossible to quantify.

But since both Timothy (on IBM-MAIN) and Alan (on IBMVM) have stated
that there is no official statement of direction, then the good news
is that it's not even a rumor at this point -- it's fiction. What we
need to worry about is it becoming reality down the road.

the problem is that bean counting frequently has a 3month horizon
... which is easy to quantify ... the impossible part is trying to extend
past the quarterly horizon .... and it may have to constantly be repeated
with every new generation of bean counters.

HONE had a recurring problem for at least a decade ... somebody from
the branch would be promoted to the DP hdqtrs that included the HONE
organization. At some point the branch person would become aware that
HONE was not MVS (but virtual machine) based ... and decide
that their mark on the corporation would be made by moving HONE off
the VM platform to MVS. They then would direct the HONE organization
to (stop whatever they were doing and) port everything to MVS ... it
might take a year before there was enough evidence (for upper
executives) that it wasn't practical for HONE to operate on the MVS
platform ... and then the HONE organization would return to their
normal duties for a couple months until that executive was promoted
and somebody new came in to repeat the process.

lefuller@SBCGLOBAL.NET (Lloyd Fuller) writes:
I respect your knowledge, Lynn, but I cannot let that go by without
saying something.

1. NCSS did not go away because of the PC revolution: they gave up
after D&B bought them. I worked there at the time on VP/CSS. There
are MANY things that we did with VP/CSS that even PCs, z/VM and z/OS
still cannot do (look up PROTECT EXEC in the old VM requirements for
example). The powers that be at NCSS decided that although it was
profitable (very much so), since D&B required its subsidiaries to be
number 1 or 2 in the market, they could not cut that and gave up on
time-sharing.

2. NOMAD has not gone away. In fact SElect Business Solutions would
still be VERY happy to sell you a license today for NOMAD or
UltraQuest.

from above:
Five of the National CSS principals participated in a recorded telephone
conference call with a moderator addressing the history of the company's
use of RAMIS and development of NOMAD. The licensing of RAMIS from
Mathematica and the reasons for building their own product are discussed
as well as the marketing of RAMIS for developing applications and then
the ongoing revenue from using these applications. The development of
NOMAD is discussed in detail along with its initial introduction into
the marketplace as a new offering not as a migration from RAMIS. The
later history of NOMAD is reviewed, including the failure to build a
successor product and the inability to construct a viable PC version of
NOMAD.
... snip ...

somebody/somewhere upthread had the original (ibm-main) post that started
out with an abc news reference that apparently had just discovered the
RFI from last year. there were some statements that the situation might
be the result of some recent budgetary policy (as opposed to something
that dates back possibly a decade or more).

The thread started late feb, just before 1mar2010 ... which was 7yrs
after (1mar2003) the secret service was transferred to homeland security
(from treasury). there have been past news references that after the
1mar2003 transfer something like 1/3rd of the secret service budget had
disappeared into homeland security ... in a period when the secret
service was being tasked with more activities (and having to make do with
2/3rds the budget).

when I was an undergraduate at the univ. datacenter ... the datacenter
sort of went thru the opposite ... making the case to the state
legislature that the datacenter should operate as an independent entity.
the complaint was that up until then ... "powerful" depts could run
roughshod over the datacenter ... using resources way in excess of any
actual budget contribution (since the datacenter accounting was a purely
fictitious matter).

the change at the legislature was that explicit funds showed up in the
univ. budget for each dept and were actually transferred to the books of
the univ. datacenter ... which had become an entity independent of the
univ. the change put the datacenter on a much better fiscal footing
... actually having real budget from their users ... in proportion to
actual resource usage ... which, in turn, allowed the datacenter to do
budget planning and equipment ordering (up until then some depts. could
use enormous amounts of resources for which the datacenter wasn't
provisioned ... which then resulted in less powerful depts
having to make do with less than what they were budgeted for).

Up until then there was a disconnect between the funny money that the
datacenter was billing for usage ... and the operating budget that the
datacenter was given for purchases, leases, expenses, payroll, etc. With
the budget/accounting change, the datacenter could actually provision
the equipment/software/staff for providing the services that they were
being paid to provide (in effect, the budget of the 2nd class depts. had
been being misappropriated to provide services for the more powerful
depts ... out of proportion to what they were paying).

z9 / z10 instruction speed(s)

tony@HARMINC.NET (Tony Harminc) writes:
I don't know about a total IT budget of $38k, but in 1975 licensed software
was pretty much a novelty. The first priced version of MVS (or any other IBM
OS except perhaps ACP/TPF?) had yet to appear, and most software was written
in house. Some shops were using priced IBM software like PL/I Optimizer, or
the COBOL equivalent, and bigger places ran non-free IMS and/or CICS. And
there were priced (duh) products from other vendors, like Syncsort. But
buying run-the-business application packages was pretty rare.

the 23jun69 unbundling announcement started charging for software, SE
services, and other stuff (the result of various litigation). however,
the corporation managed to make the case that kernel software should
still be free ... misc. past posts mentioning unbundling

there was a mad rush to get products back into the 370 product pipeline
(both hardware & software; aka FS, something radically different from
370, had been planned to replace 370). the lack of products in the
pipeline was also used to explain the clone processors being able to
gain a foothold in the market. as a result, there was a decision to also
start charging for kernel software.

I had been doing 360/370 work during the FS period (and even made some
less than complimentary observations about the FS activity) ... somewhat
as a result ... some of that stuff was picked up as part of basic vm370
release 3. There was then a decision made to package up lots of my other
stuff and release it as an independent "resource manager" ... and it was
selected to be the guinea pig for kernel software charging. I got to
spend lots of time with business, planning, and legal groups on the
subject of kernel software charging. Part of the policy was that kernel
software directly involved in hardware support was to remain free ... but
other stuff could be charged for.

The "resource manager" was shipped as separately charged for product.
The pricing of such software had to at least cover the cost of the
development (aka pricing couldn't be done at leas than the development
cost). Getting close to release ... the work on deciding price for MVS
resource manager had been done ... and the direction was that my
resource manager couldn't be shipped at lower price than what was being
planned for the MVS resource manager (although the MVS development costs
had been enormously greater ... all the work that I had done with regard
to my costs for setting price ... just went out the window).
http://www.garlic.com/~lynn/subtopic.html#fairsharehttp://www.garlic.com/~lynn/subtopic.html#wsclock

During the transition period, the other kernel pricing policy was that
free software couldn't have a prerequisite on charged-for kernel
software. For vm370 release 4, there was a decision to release
multiprocessor support (as support for hardware, it would be part of the
free kernel base). The problem was that I had included in my resource
manager ... a whole bunch of code that the multiprocessor support used.
The resolution was that something like 90% of the code in my "release 3"
resource manager ... was moved into the free release 4 kernel (w/o
changing the price of the resource manager).
http://www.garlic.com/~lynn/subtopic.html#smp

More & more software (both kernel & non-kernel) was being charged for.
vm/370 release 6 was still free ... but had a lower-priced kernel add-on
(bsepp, for entry level and midrange customers) that was subset of the
higher-priced SEPP (which had absorbed my resource manager and added a
bunch of other stuff).

As previously mentioned, VM/SP "Release 1" marked the end of the
transition in kernel software pricing; BSEPP/SEPP were merged back into
the base kernel ... and the whole thing became charged for (the name
change also reflected the change: instead of vm/370 "release 7" ... it
was vm "system product" release 1 ... i.e. a charged-for product).

eamacneil@YAHOO.CA (Ted MacNEIL) writes:
A contention which I disagree with.
It's cheaper to build one type of chip/card, and use other methods to
limit capacity, which is what software pricing is based on.

aka there are a lot of upfront & fixed costs ... but volume
manufacturing techniques frequently drop the bottom out of per-unit
costs (i.e. per-unit price can be dominated by the upfront & fixed
costs; leveraging a common unit built in larger volumes can easily
offset having multiple custom-designed items).

it actually costs both the vendor and the customers quite a bit to
physically change an item ... potentially significantly more than the
bare-bones per-unit (volume) manufacturing cost ... as a result, having
a large number of units prestaged ... is a trade-off of the extra volume
manufacturing cost of each of the units against the vendor & customer
change cost of physically adding/replacing each individual item.

it is somewhat the change-over to the 3rd wave (information age).
Earlier, the cost ... and therefore the perceived value, was mostly in
the actual building of something. moving into the 3rd wave, much more of
the value has moved to the design of something ... and volume
manufacturing techniques have frequently reduced the per-unit building
cost as close as possible to zero.

They are now doing multi-billion dollar chip plants that are obsolete in
a few years. Manufacturing cost is the actual creation of the wafer
... with thousands of chips cut from each wafer (motivating the move
from 8in wafers to 12in wafers, getting more chips per wafer). The
bare-bones cost of building one additional chip ... can be a couple of
pennies ... however, the chip price may be set at a couple hundred
dollars (or more) in order to recover the cost of the upfront chip
design as well as the cost of the plant.

It may then cost the vendor&customer, tens (or hundreds) of dollars to
actually physically deploy each chip where it is useful.

An economic alternative is to package a large number of chips in a
single deployment ... potentially at a loss of a few cents per chip
... in the anticipation that the extra chips might be needed at some
point (possibly being able to eliminate the cost of actually having to
physically deploy each individual chip).
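
a little C toy to make the arithmetic concrete (every number here is
made up for illustration ... the point is just that the per-chip price
is dominated by the upfront design/plant costs, not by the marginal
cost of one more chip):

  /* toy amortization arithmetic; all numbers are made up */
  #include <stdio.h>

  int main(void)
  {
      double upfront         = 3.0e9;     /* chip design + plant ($) */
      long   lifetime_volume = 10000000;  /* chips sold over plant life */
      double wafer_cost      = 5000.0;    /* $ per processed wafer */
      double chips_per_wafer = 2000.0;

      /* marginal cost of one more chip: just its slice of a wafer */
      double marginal = wafer_cost / chips_per_wafer;        /* $2.50 */

      /* price needed to recover the upfront costs over the volume */
      double price = upfront / lifetime_volume + marginal;   /* ~$302 */

      printf("marginal cost/chip: $%.2f\n", marginal);
      printf("price to recover upfront: $%.2f\n", price);
      return 0;
  }

with numbers like these, throwing a few extra chips into a single
deployment (at cents or a few dollars each) can easily beat paying tens
or hundreds of dollars later to physically deploy one more chip.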

note that the pharmaceutical industry has been going thru similar
scenarios with brand drugs (with upfront development costs) and generic
drugs.

something similar was used as justification for the FS project ... the
corporate R&D costs were significantly higher than those of the vendors
turning out clone controllers ... including the one I worked on as an
undergraduate
http://www.garlic.com/~lynn/subtopic.html#360pcm

quote from above:
IBM tried to react by launching a major project called the 'Future
System' (FS) in the early 1970's. The idea was to get so far ahead
that the competition would never be able to keep up, and to have such
a high level of integration that it would be impossible for
competitors to follow a compatible niche strategy. However, the
project failed because the objectives were too ambitious for the
available technology. Many of the ideas that were developed were
nevertheless adapted for later generations. Once IBM had acknowledged
this failure, it launched its 'box strategy', which called for
competitiveness with all the different types of compatible
sub-systems. But this proved to be difficult because of IBM's cost
structure and its R&D spending, and the strategy only resulted in
a partial narrowing of the price gap between IBM and its rivals.
... snip ...

There have been some comments that the baroque nature of the pu4/pu5
(vtam/3705ncp) interface did try to approximate the FS "high level of
integration" objective.

LPARs: More or Less?

stephen@HAWAII.EDU (Stephen Y Odo) writes:
and that's the rub. why the requirement that it be used ONLY for research?

none of the other vendors have such restrictions.

that's one of the biggest reasons we're moving off of the mainframe ...

telcos faced something similar with dark fiber and the NSFNET backbone
in the 80s ... (tcp/ip is the technology basis for the modern internet,
the NSFNET backbone was the operational basis for the modern internet,
and CIX was the business basis for the modern internet).

telcos have large fixed costs & expenses ... but recover costs based on
usage. all the fiber going into the ground enormously increased capacity
... however, w/o a significant reduction in use charges ... people
weren't going to use the extra capacity. however, any upfront reduction
in the use charges ... w/o the bandwidth hungry applications ... would
result in telcos operating at a significant loss for possibly a decade
(the period it would take for the new bandwidth hungry applications to
evolve in an environment with drastically reduced fees).

the telcos leveraged the NSFNET backbone as a commercial-free technology
incubator. The NSFNET backbone RFP was awarded for $11.2M ... and was
for non-commercial use only. Folklore is that the resources put into the
NSFNET backbone were closer to four times the RFP amount. The
non-commercial use of the NSFNET backbone would limit the impact on
telco revenue ... but at the same time the telcos could provide a large
amount of extra resources for a non-profit educational technology
incubator to promote the evolution of the bandwidth hungry
applications.

we had been working with NSF and various institutions leading up to the
NSFNET backbone effort ... but then weren't allowed to bid on the RFP.
The director of NSF attempted to help by writing a letter to the
corporation asking for our help (there were also comments that what we
already had running was at least five years ahead of all the NSFNET
backbone RFP responses to build something new). Turns out that just
aggravated the internal politics.

LPARs: More or Less?

stephen@HAWAII.EDU (Stephen Y Odo) writes:
Thus we paid regular price (less 15% academic institution discount).
Which made it way too expensive for us. And paved the way for our
migration to Solaris (which was FREE whether we used it for academic OR
research OR administrative purposes).

I vaguely remember that in the 60s ... the academic institution discount
was 40% ... but that seemed to change with the 23jun69 unbundling
announcement (in response to gov. litigation) ... along with starting to
charge for application software, SE services, and other stuff.
http://www.garlic.com/~lynn/submain.html#unbundle

the scenario here is an environment where the vendor actually has the
system, the charging, and quite a bit of operational control over the
resources being used.

in a less structured environment there would likely be a whole lot of
piracy ... akin to what goes on with software and content on lots of
platforms (the motivation behind a lot of DRM activity).

in the desktop environment there has been a software analogy, with lots
of software being packaged with a new PC ... requiring payment of a fee
in return for activation. the increasing availability of high-speed
broadband internet has mitigated that somewhat ... being able to
download ... in lieu of requiring some sort of physical delivery
(startrek transporter technology for similar delivery of physical chips
isn't here yet).

I got contacted about looking at the possibility of doing something
similar for common processor chips (a countermeasure to copy chips and
grey market chips).

A little more topic drift: in the mid-90s, I had semi-facetiously
commented that I would take a $500 milspec part and aggressively cost
reduce it by 2-3 orders of magnitude while making it more secure. A very
aggressive KISS campaign helped reduce circuits per chip ... but along
with technology decreasing circuit size ... the wafer area lost to cuts
was starting to exceed the wafer area used by the chips themselves
(basically close to the EPC RFID chip scenario; further increases in
chips/wafer required new technology for cutting wafers into chips
... that resulted in significantly lower wafer area loss). The KISS
scenario was independent of the grey market/copy chip scenario.
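
a toy calculation of the wafer-cut (kerf) issue ... as the chip edge
shrinks toward the width of the saw cut, more of the wafer goes to the
cuts than to the chips (dimensions are made up for illustration):

  /* toy kerf arithmetic: fraction of wafer area lost to cuts */
  #include <stdio.h>

  int main(void)
  {
      double kerf = 0.10;                  /* mm lost per saw cut */
      double edge[] = { 5.0, 1.0, 0.2 };   /* chip edge sizes in mm */

      for (int i = 0; i < 3; i++) {
          double chip_area = edge[i] * edge[i];
          double pitch     = edge[i] + kerf;   /* chip plus one cut */
          double cell_area = pitch * pitch;    /* wafer area consumed */
          printf("%.1fmm chip: %2.0f%% of wafer area lost to cuts\n",
                 edge[i], (1.0 - chip_area / cell_area) * 100.0);
      }
      /* at 0.2mm chips with a 0.1mm kerf, over half the wafer area
         goes to the cuts ... hence new dicing technology */
      return 0;
  }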

search engine history, was Happy DEC-10 Day

Peter Flass <Peter_Flass@Yahoo.com> writes:
History has shown that IBM can't "force" stuff on anybody. There is a
long list of IBM products that were killed off because they didn't
sell: OS/2 (unfortunately), MicroChannel, Token Ring, etc. etc.

at least T/R sold an amazing amount ... akin to the PCs that were used
for 3270 terminal emulation ... just into a market that no longer
exists.

Standard 3270 terminals required point-to-point coax cables ... cable
from the terminal ... typically all the way back to the controller in
the datacenter. A growing problem in many buildings was that the weight
of all those coax cables was starting to exceed bldg. loading limits.

It was possible to remap PC 3270 terminal emulation to CAT5 T/R, which
used significantly lighter cable ... and only had to run to a MAU in a
nearby wiring closet ... which could support a large number of
terminals. Then only a single cable needed to run from the wiring closet
down to the datacenter. The bldg. weight loading was enormously
decreased.

Large numbers of bldgs. were rewired for t/r CAT5.

The problem with microchannel and T/R (and possibly some of OS/2) was
the terminal emulation paradigm hanging over all their heads.

The PC/RT was billed as an engineering workstation (in the unix market)
with an AT-compatible bus. Austin did their own PC/RT 4mbit T/R card for
the AT bus ... trying to maximize the sustained card thruput.

The RS/6000 was upgraded to use microchannel ... however, there was a
corporate edict that RS/6000 had to help their corporate brethren and
use PS2 microchannel cards. It turns out that the PS2 16mbit T/R
microchannel card had lower per-card thruput than the PC/RT 4mbit T/R AT
bus card (reflecting the PS2 terminal emulation design point ... having
300-400 PS2s all sharing a common 16mbit T/R).

The PS2 microchannel SCSI disk card had a similar design point, as did
the PS2 microchannel graphics cards. The joke was that with the edict in
place ... except for strictly numerical-intensive activity ... the
RS/6000 couldn't have any better thruput than a PS2.

A partial solution to the corporate politics was the RS/6000 730
... which was a "wide" 530 ... that had a vmebus with a vmebus graphics
card.

Other vendors were then starting to sell high-speed microchannel
ethernet cards that ran over T/R CAT5 (instead of ethernet cable), with
individual cards able to sustain full media speed. When the Almaden
research center was wired, they found that they got better thruput and
lower latency with 10mbit ethernet over the bldg's cat5 than they got
with 16mbit T/R (however, lots of commercial customers continued to use
16mbit T/R for a very long time).

The high-speed ethernet cards significantly helped client/server ...
compared to the T/R terminal emulation design point. A server ethernet
adapter card needed to sustain the aggregate activity of all its clients
... while the individual 16mbit T/R microchannel cards were designed to
share small (1/300, 1/400) pieces of the 16mbit ... aka terminal
emulation ... going into the datacenter mainframe.

Steve_Thompson@STERCOMM.COM (Thompson, Steve) writes:
This is the level of machine IBM killed when they pulled the plug on the
FLEX/ES boxes.

And those boxes (FLEX/ES) were upgradeable (as I understand it) to be
able to connect to the "standard" RAID boxes, and even have CTCA between
them (once they had ESCON capability), so that you could grow into a
"sysplex".

And what did such a box cost compared to the z10-BC?

That would have been a drop, plug and play environment (pretty much a
turn-key system).

a major FLEX platform was sequent (before ibm bought sequent). we did
some consulting for Steve Chen when he was CTO at sequent ... and there
were customers that had escon attachments (ibm connectivity) for the
sequent (numa) box (up to a 256-processor intel shared-memory
multiprocessor). I know of at least one sequent numa customer (in the
90s, before sequent was bought by ibm) that had escon and 3590 tape
drives.

(after departing) we were called in to consult with a small
client/server startup that wanted to do payment transactions on their
server ... the startup had also invented this technology called "SSL"
that they wanted to use (the result is now frequently called "electronic
commerce").

one of the things happening during this period was that lots of servers
were starting to experience heavy processor overload with web
operation. the small client/server startup was growing and having to add
increasing numbers of servers to handle their various kinds of web
traffic. finally they installed a sequent system ... and things settled
down.

it turned out that sequent had fixed the networking implementation
problem that was absorbing the majority of server processing on other
platforms. sequent explained that they had encountered the specific
problem with commercial accounts supporting 20,000 (terminal) telnet
sessions ... long before other platforms started experiencing the same
networking problem with large numbers of HTTP/HTTPS connections (94/95
timeframe). Somewhat later, other platforms started distributing fixes
for the tcp/ip processor overhead problem.

for other topic drift ... part of the effort for electronic commerce was
deploying something called a "payment gateway" ... that took payment
transactions tunneled thru SSL from webservers on the internet and
passed them to the acquiring processor. misc. past posts mentioning the
payment gateway
http://www.garlic.com/~lynn/subnetwork.html#gateway

search engine history, was Happy DEC-10 Day

Peter Flass <Peter_Flass@Yahoo.com> writes:
In the early days of LANs, Token Ring and Ethernet were competitive.
If T-R performance had been there, (and IBM hadn't caused it to be
overpriced) I'm sure it would still be around. No technical reason
why not.

aka T/R wasn't being sold as a LAN solution ... it was being sold as a
solution to the 3270 terminal (emulation) cabling problem (including the
weight). one might claim that it isn't around any more than terminal
emulation is. t/r pricing was considered a bargain ... when the
avg. labor price to run 3270 coax from the machine room to a desk was
$1000.
http://www.garlic.com/~lynn/subnetwork.html#emulation

one of the vendors showed some stuff that they were doing for toyota.
the copper-wire wiring harness has been a major point of failure ... and
they were working on an inexpensive dual counter-rotating (1 mbit/sec)
LAN to replace the wiring harness. all the components in the vehicle
would have power distribution ... with a common command & control LAN
mechanism (dual counter-rotating, no single point of failure) that all
switches and components would be on.

Anne & Lynn Wheeler <lynn@garlic.com> writes:
a major FLEX platform was sequent (before ibm bought sequent). we did
some consulting for Steve Chen when he was CTO at sequent ... and there
were customers that had escon attachments (ibm connectivity) for sequent
(numa) box (up to 256 intel shared memory multiprocessor). I know of at
least one sequent numa customer (in 90s, before sequent bought by ibm)
had escon and 3590 tape drives.

In original sept. meeting we had thot we would have drives from IBM by
11/1/95 ... and would be able to loan (at least) one drive to sequent
for development testing.

they currently have one 3590 drive for the project attached to a dynix
2.3 system. the 3590 driver (w/o stacker support) will be thru beta
test on 11/1/95 ... but will continue thru various kinds of product
test/support.

We require a 3590 driver (eventually w/stacker support) for a dynix
4.2 system. Sequent estimates approximately 7-10 days to port the 2.3
driver to 4.2 ... after availability of 3590 drive on dynix 4.2 level
system.

We've tentatively estimated that we might have a loaner 3590 drive for them
on or around mid. Dec.
... snip ... top of post, old email index.

2) The writev() implementation cannot be used for scatter gather. A possible
solution would be to write a special interface for direct QCIC DMA
gathering of block fragments. To not have this means severe memory/cache
overhead in reconstructing new blocks for record inserts and length changes.
Sequent was asked to provide a cost for access to the PTX kernel code
for purposes of estimating the effort to add code to support a true
gather function for whole blocks on tape. The $$ figure was never
given.
... snip ... top of post, old email index.
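
for reference, a minimal sketch (posix writev, not whatever dynix/ptx
actually provided) of what a true gathering write looks like
... several non-contiguous block fragments written in one call, w/o
first copying them into a new contiguous block (the memory/cache
overhead the email is complaining about):

  #include <string.h>
  #include <sys/types.h>
  #include <sys/uio.h>

  /* gather three non-contiguous fragments into one write */
  ssize_t write_block_fragments(int fd)
  {
      static char hdr[]  = "block header";
      static char body[] = "record body after insert/length change";
      static char trl[]  = "block trailer";

      struct iovec iov[3];
      iov[0].iov_base = hdr;  iov[0].iov_len = strlen(hdr);
      iov[1].iov_base = body; iov[1].iov_len = strlen(body);
      iov[2].iov_base = trl;  iov[2].iov_len = strlen(trl);

      /* the kernel gathers the fragments in order; w/o this, each
         fragment gets copied into a staging buffer first */
      return writev(fd, iov, 3);
  }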

Just for what it's worth, using today's Intel based processors and
IBM's zPDT "software based system" (some might call it "emulator")
we are getting well over 100 MIPS per core. (I suspect FLEX-ES
version 8 should also be in this range, if allowed on the market.)
With a quad core processor, running 3 enabled processors, that's
somewhere in the range of 300 - 350 MIPS in a "relatively" inexpensive
system.
zPDT is only available for developers today though. IBM is very
cautious about making any comments about possible commercial
availability.
Mike Hammock
mike@hammocktree.us

note that the supercomputer market is starting to latch onto the GPUs
developed for high-end graphics in the gaming market (starting to push a
thousand cores/GPU) ... it would be interesting to see if any emulator
pieces could be mapped to a GPU with hundreds/thousands of cores.

however, there seems to be some additional sorting ... apparently
oriented towards overclocking & gaming marketing ... chips that push
higher rates ... and are sold at a premium price. brand names are
starting to offer boxes with such chips ... when it used to be just
niche, offbrand players.

some of the reduced-core chips aren't necessarily just about pricing
... sometimes it may be chip defects that would ordinarily have the
whole chip going to the trashbin ... localized defects may be core
specific ... with the rest of the chip still being usable.

in the late 70s & early 80s, single-chip processors were starting to
appear that drastically reduced the cost of building computer systems
... and lots of vendors started moving into the market. However, the
cost of producing a proprietary operating system hadn't come down ... so
overall costs weren't reduced that much, and therefore the price that
the system could be offered to customers at wouldn't come down.

I've frequently commented that those economics significantly contributed
to the move to unix offerings ... vendors could ship unix on their
platform for enormously lower cost (similar to the cost reduction
offered by single-chip processors) compared to every vendor doing their
own proprietary operating system from scratch.

A similar argument was used in the IBM/ATT effort moving the higher
level pieces of UNIX to a stripped-down TSS/370 base (the cost of adding
mainframe ras, erep, device support, etc ... being several times larger
than a plain unix port). reference in this recent post mentioning the
adtech conference I did (which included presentations on both the
unix/ssup activity as well as running cms applications on mvs):
http://www.garlic.com/~lynn/2010e.html#17 Senior Java Developer vs. MVS Systems Programmer (warning: Conley rant)

the conference was also somewhat the origin of the VM/XB (ZM) effort
(also mentioned in the above). the effort was then declared strategic,
hundreds of people were writing specs, and then it collapsed under its
own weight (somewhat a mini-FS). The strategic scenario was doing a
microkernel (somewhat akin to the tss/370 ssup effort for ATT/unix) that
had (at least) all the mainframe ras, erep and device support ... that
could be used as a common base for all the company's operating system
offerings (the costs to the company in this area were essentially fully
replicated for every operating system offering).

in the later 80s, having aix/370 (a project that ported UCLA's Locus
unix-clone to both 370 & 386) run under vm370 let aix/370 rely on vm370
RAS (the cost of adding that RAS directly to aix/370 would have been
many times larger than the simple port of locus to 370).

In recent years, increasing amounts of RAS have been moving into
intelligent hardware ... somewhat mitigating the duplication of effort
in the operating systems.

NSF To Fund Future Internet Architecture (FIA)

Albert Manfredi <bert22306@hotmail.com> writes:
I remember very well how IP was in principle going to be replaced by
"something better," when the time was right. That was supposed to be
the ISO network protocol suite, with the nice 160-bit address formats.
Even the US DoD, which had played a major role in developing IP, was
waiting to implement the new ISO suite. So, it's not like there was a
huge "us vs them" attitude, except perhaps within the standard bodies
themselves. And these standards bodies were NOT national in nature. At
least, that's how I remember it. Proponents of the IP suite were
certainly not ONLY Americans.

Then, ca. 1995 or 1996, the bottom literally fell out of the ISO
effort. That's when the world's focus went 100 percent to IP.

I was involved in trying to take HSP to ansi x3s3.3 (the ISO-chartered
US body for standards related to OSI levels 3&4). ISO had guidelines
about not working on standards for anything that didn't conform to the
OSI model. HSP was rejected because it

2) went from the top of transport directly to the LAN MAC (bypassing the
OSI transport/network interface)

3) went to the LAN MAC interface ... which doesn't exist in OSI ... it
sits someplace in the middle of level 3/network.

there were jokes in the period about the standards process difference
between IETF and ISO ... with IETF actually requiring interoperable
implementations, while ISO didn't require that a standard have proof
that it was even implementable.

one of the big upticks for IP was the availability of the BSD
implementation. there is old folklore about ARPA constantly telling CSRG
that they couldn't work on networking ... and CSRG would "yes them to
death" (but continue to work on networking).

but for various corporate political reasons we weren't allowed to bid on
the NSFNET backbone RFP. The director of NSF wrote a letter to the
corporation trying to change the situation ... but it just seemed to
aggravate the internal politics (there were also comments that the
backbone we already had running internally was at least five years ahead
of all the NSFNET RFP bid submissions).

in fact, we had full T1 and higher speed links operational (and we claim
that the example contributed to T1 being specified in the NSFNET
backbone RFP). The winning bid ... didn't even put in T1 links ... but
had 440kbit links ... and then, apparently to somewhat seem to meet the
RFP ... had T1 trunks with the telco multiplexing the 440kbit links (we
made a sarcastic reference that some of the T1 trunks possibly were in
turn multiplexed ... maybe even T5 ... so they might be able to
theoretically claim a T5 network).

trivia ... what was bringing down the floor nets sunday night and monday
morning before interop '88 opened?

blackstone was the major reference in the news ... there was a much
smaller reference to this other company (that had purchased the pricing
services division from one of the rating companies in the early 70s); it
wasn't clear whether they might be subcontracting to do various
calculations.

reports are that there was $27T in toxic CDOs done during the period
(only $5.2T was being carried offbook by the four largest,
too-big-to-fail financial institutions when TARP was appropriated; aka
courtesy of their unregulated investment banking arms, by way of GLBA
and the repeal of Glass-Steagall).

There would be several transactions along the toxic CDO value chain,
with commissions at the various points ... possibly 15-20% aggregate
commissions by the time the $27T made its way thru the labyrinth ... or
maybe $5T.

Wharton business school had an article that possibly 1000 people are
responsible for 80 percent of the financial mess ... $5T would be enough
to go around ... with enough left to keep the other players agreeable
... including congress. The amounts involved are so large that they
would be enough to more than overcome any possible individual concerns
about the effects on their institutions, the economy, and/or the
country.

tony@HARMINC.NET (Tony Harminc) writes:
I'm still not convinced they are related. Hardware-level TLB
management would still be there for the shadow tables. In the early
days where the only TLB invalidating instruction was PTLB, which
clobbered the whole thing, the trick would presumably lie in avoiding
that instruction like the plague.

the shadow table operation followed the same rules as the TLB. PTLB,
ISTO, ISTE, & IPTE were all instructions in the original 370 virtual
memory architecture. When the 370/165 hardware group ran into problems
with the virtual memory hardware retrofit ... and wanted to drop several
features in order to buy back six months in the schedule ... ISTO, ISTE,
and IPTE were part of the things from the base architecture that were
dropped (leaving only PTLB ... i.e. every time any invalidation
occurred, everything got invalidated).

Also, the original cp67 and vm370 "shadow table" support only had a
single "STO stack" ... this is analogous to the 360/67 and 370/145 TLB
... where every time the control register address space pointer changed
(CR0 in 360/67 and CR1 in 370/145) ... there was an implicit TLB purge
(aka all TLB entries implicitly belonged to the same/single address
space). The corresponding vm370 implementation was that all shadow table
entries were invalidated/reset ... any time there was a ("virtual")
CR0/CR1 change.

The 370/168 had a seven-entry STO-stack ... aka every TLB entry had a
3-bit identifier (8 states: invalid, or belonging to one of seven
address spaces; 370/145 TLB entries had a single bit, either valid or
invalid). Loading a new CR1 value on the 370/168 didn't automatically
purge the whole TLB ... it would check whether the new value was one of
the already loaded, saved values ... and if there was a match ... it
would continue. If the new address space value loaded into CR1 didn't
match a saved value ... it would select one of the seven saved entries
to be replaced ... and invalidate/reset all TLB entries that had the
matching 3-bit ID.
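
a toy C sketch of the seven-entry STO-stack mechanics (names, sizes,
and the round-robin replacement are all made up for illustration
... no claim here about the actual 370/168 replacement logic):

  #include <stdint.h>

  #define STO_SLOTS   7      /* seven saved address space (STO) values */
  #define TLB_ENTRIES 128    /* made-up TLB size */

  static uint32_t sto_slot[STO_SLOTS + 1];  /* slots 1..7; ID 0 = invalid */
  static uint8_t  tlb_id[TLB_ENTRIES];      /* 3-bit ID per TLB entry */
  static int      next_victim = 1;          /* toy round-robin replacement */

  /* called when a new address space pointer is loaded into CR1 */
  int load_cr1(uint32_t new_sto)
  {
      /* if the new STO is already one of the seven saved values,
         TLB entries tagged with that ID stay valid ... just continue */
      for (int i = 1; i <= STO_SLOTS; i++)
          if (sto_slot[i] == new_sto)
              return i;

      /* otherwise pick a victim slot and invalidate/reset only the
         TLB entries carrying that 3-bit ID (not the whole TLB) */
      int victim = next_victim;
      next_victim = (next_victim % STO_SLOTS) + 1;
      for (int j = 0; j < TLB_ENTRIES; j++)
          if (tlb_id[j] == victim)
              tlb_id[j] = 0;                /* mark invalid */
      sto_slot[victim] = new_sto;
      return victim;
  }

the 360/67 and 370/145 behavior is the degenerate single-slot case of
the same logic ... every CR0/CR1 change misses, so everything gets
invalidated ... which is exactly the behavior the single "STO stack"
vm370 shadow table implementation mirrored.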

The VM370 product didn't support multiple shadow tables until the priced
kernel add-on to VM370 release 5. MVS made extremely frequent changes to
the CR1 value ... even w/o doing an explicit PTLB ... and every time, VM
had to do the full invalidation of all shadow table entries
... corresponding to the similar implicit operation that went on with
the 370/145 (vm not having a multiple-entry STO-stack equivalent, at
least up until the priced kernel add-on to vm370 release 5).

There was a somewhat analogous issue on real 3033 hardware with the
introduction of dual-address space mode. The 3033 was effectively the
same logic design as the 370/168 ... remapped to slightly faster chips
... and as such ... the TLB had the same seven-entry STO-stack. When
using dual-address space mode ... the increase in the number of
different address space pointers was overrunning the 3033 (seven-entry)
STO-stack, and the frequency of (implicit) TLB entry invalidations went
way up ... to the point that dual-address space was running slower than
the common segment implementation.

Dual-address space mode was somewhat a subset retrofit of the 370-xa
multiple address spaces. The common segment problem on 370 was that the
MVS kernel was taking half the 16mbyte address space, and the common
segment started out taking only a single mbyte segment. The common
segment was to address the pointer-passing paradigm from MVT & SVS days
for subsystems ... which had resided in the same address space as the
application. With the move to MVS, the subsystems were now in a
different address space (from the calling applications), which broke the
pointer-passing API paradigm. The solution was to have a common segment
that was mapped the same in applications and subsystems. The problem was
that the common segment grew with the subsystems installed and the
applications using subsystems ... and larger installations had the
common segment area pushing over five mbytes (threatening to leave only
2mbytes for application use).
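
a toy sketch of the pointer-passing issue (all names made up; the real
CSA mechanics are considerably more involved):

  /* a caller builds a parameter block and passes its address to a
     subsystem.  in MVT/SVS the subsystem lived in the same address
     space, so any pointer worked.  with MVS the subsystem is in a
     different address space ... the block only stays addressable if
     it lives in storage mapped identically everywhere, i.e. the
     common segment/area. */

  struct parm_block { int func; char data[256]; };

  /* stand-in for common segment storage: mapped the same into
     every address space */
  static struct parm_block common_area;

  struct parm_block *build_request(int func)
  {
      /* building this in the caller's private storage would hand
         the subsystem a pointer that means something else (or
         nothing) in the subsystem's own address space */
      common_area.func = func;
      return &common_area;   /* address valid in caller AND subsystem */
  }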

The Burlington lab was a large MVS shop with large chip-design fortran
applications and a very carefully crafted MVS that held the common
segment area to one mbyte ... so the applications still had seven
mbytes. However, increases in chip complexity were forcing the fortran
applications over seven mbytes ... threatening to convert the whole
place to vm/cms ... since the fortran applications under CMS could get
very nearly the whole 16mbytes.

basically, SIE greatly expanded the architecture definition for virtual
machine mode ... as an addition/alternative to real machine mode ... aka
the principles of operation defines a lot of stuff about how things
operate in real machine mode ... virtual machine mode makes various
changes ... like what the rules are for supervisor & problem state when
running in virtual machine mode; it greatly increased the performance of
running in virtual machine mode (compared to full software simulation)
... modulo the 3081 choosing to have the service processor page the SIE
microcode on a 3310 FBA device ... recent ref
http://www.garlic.com/~lynn/2010e.html#34 Need tool to zap core

now, one of the big guest performance hits for vm370 was the transition
from svs to mvs ... because the number of times that MVS changed CR1
exploded (compared to svs) ... requiring vm370 to flush the shadow page
tables each time (the virtual machine changed its "virtual" cr1).

now, one of the things that could be done in a SIE scenario is to change
the operation of TLB miss/reload in virtual machine mode ... so that the
hardware performs the double lookup on a TLB miss ... eliminating the
requirement for shadow tables (instead of having to maintain the shadow
table information ... which involves a significant amount of overhead in
the flush scenario ... whether explicit via a "virtual" PTLB or implicit
by a change in the "virtual" CR1 value).
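
a toy sketch of the double lookup idea (made-up names; real SIE and
real 370 page table formats are far more involved):

  #include <stdint.h>

  #define INVALID ((uint32_t)-1)

  /* walk a page table rooted at "origin"; returns a real address
     or INVALID on a fault (details abstracted away) */
  typedef uint32_t (*walk_fn)(uint32_t origin, uint32_t addr);

  /* TLB miss while in virtual machine mode: do BOTH translations
     in hardware, so no software-maintained shadow tables needed */
  uint32_t tlb_miss_vm_mode(walk_fn walk,
                            uint32_t guest_cr1,  /* guest's page tables */
                            uint32_t host_sto,   /* vm's tables for guest */
                            uint32_t guest_vaddr)
  {
      /* 1st: guest virtual -> what the guest thinks is real */
      uint32_t guest_real = walk(guest_cr1, guest_vaddr);
      if (guest_real == INVALID)
          return INVALID;          /* reflect page fault to the guest */

      /* 2nd: guest "real" -> actual machine real.  with shadow
         tables this combined result is what vm kept cached ... and
         had to throw away on every "virtual" CR1 change */
      uint32_t host_real = walk(host_sto, guest_real);
      if (host_real == INVALID)
          return INVALID;          /* host page fault; vm pages it in */

      return host_real;            /* goes into the TLB */
  }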

As SIE began to blanket nearly every aspect of machine operation
... with the virtual machine flavor ... it greatly simplified the
introduction of LPARs.

there used to be some SHARE thing about the greatly increasing MVS
bloat ... one was a scenario about creeping processor bloat ... the
possibility of waking up one day to find MVS consuming all processor
cycles with none left for applications.

This was somewhat the capture ratio scenario, where the amount of
processor cycles even being accounted for was falling below 50%. The san
jose plant site highlighted a 70% capture ratio on an MVS system
dedicated to apl ... but the apl subsystem was doing nearly everything
possible to avoid using any MVS system services, as a method of
improving thruput and performance. recent capture ratio mention
http://www.garlic.com/~lynn/2010e.html#33 SHAREWARE at Its Finest
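
capture ratio is just accounted-for cpu divided by total busy cpu ... a
trivial toy illustration (made-up numbers):

  #include <stdio.h>

  int main(void)
  {
      double busy_cpu_secs      = 3600.0;  /* cpu busy over an hour */
      double accounted_cpu_secs = 2520.0;  /* charged to specific work */

      /* the remaining 30% is system overhead not attributed to
         any particular piece of work */
      printf("capture ratio: %.0f%%\n",
             100.0 * accounted_cpu_secs / busy_cpu_secs);   /* 70% */
      return 0;
  }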

The creeping bloat of the common segment area size was similar
... threatening to totally consume the remaining 16mbyte virtual address
space ... leaving nothing for applications to run in. The dual-address
space introduction in the 3033 was an attempt to at least slow down the
creeping common segment size bloat.

from above:
In an explosive new book, Bernie Madoff whistleblower Harry Markopolos
tells the inside story of how he uncovered the $65 billion fraud,
claims that he exposed State Street's alleged fraud of pension funds
and admits that he considered the idea of killing Madoff.
... snip ...

Some of this was his testimony in congressional hearings last year about
trying for a decade to get the SEC to do something about Madoff. His
concern for personal safety wasn't w/o reason. There was (at least one)
case in the news of a big FBI investigation into Internet IPO
manipulations ... when some investment banker was found on the NJ(?)
mudflats.

He seems to be having a lot more fun on the book tour ... during the
congressional hearings, he sent his lawyer for TV interviews ... his
lawyer would say he still believed that he had a chance of not
surviving.

He was asked what he would do if he was put in charge of the SEC ... and
he said that he would fire everybody currently there (and there still
has been no house cleaning). Something about nearly all of them being
lawyers with no training in financial forensics (and the industry likes
it that way).

He has re-iterated several times that he believed that if Madoff had
known about him, Madoff would have sent people after him (and his only
choice was to get to Madoff before Madoff got to him). His latest
comment was that he believed Madoff turned himself in to get into
protective custody ... that Madoff had misappropriated some people's
money who would likely do to Madoff what the author believed Madoff
would have done to him (there are references to a lot of the people
involved being sociopaths ... and other references to their being
totally amoral).

timothy.sipples@US.IBM.COM (Timothy Sipples) writes:
By the way, an awful lot of small businesses are opting for "Software as a
Service" offerings and choosing not to own or host their own servers, of
any type. If you want a zero-footprint z/OS machine -- that sure beats the
MP3000! -- it's available. To extend the above analogy, you can buy Fedex,
UPS, or USPS service and avoid renting, leasing, or owning your own trucks
or bicycles. If the world is already heading in that SaaS direction -- and
it sure looks that way -- then a z10 footprint makes even more sense.

a lot of the software-as-a-service ... and cloud computing ... very
analogous to oldtime online timesharing ... is partially being driven by
super-efficient megadatacenters (coupled with ubiquitous high-bandwidth
connectivity).

John Levine <johnl@iecc.com> writes:
The COBOL committee was in 1959, but it adapted several existing
languages, notably Grace Hopper's FLOW-MATIC which was working by
1958. There were several generations of Algol, producing reports in
1958, 1960, and 1968. Algol 60 is the best known, with working
compilers probably in 1961.

In May of 1967 an IBM System/360 model 67 replaced the IBM 7090,
Burroughs B5500 and DEC PDP-1 in Pine Hall. There is some question
about whether this machine was a model 65 or model 67, but John Sauter
remembers seeing the lights of the 'Blaauw Box', the dynamic address
translation module that is the difference between the models. Also,
Glen Herrmannsfeldt and John Ehrman remember that it was always
described as a model 67. However, despite the dynamic relocaton
capability, the model 67 was run as a model 65 using IBM's OS/360 MFT
operating system.

The original development of WYLBUR and ORVYL were done on the model
67. John Sauter remembers a flyer featuring cartoon personages named
Wylbur and Orvyl with the caption 'My brothers communicate'. MILTEN
was used to support remote users equipped with IBM 2741 terminals.
SPIRES was also originally written on the model 67. Nicklaus Wirth
developed PL360 and Algol W on the model 67; Algol W later evolved
into Pascal.

The IBM System/360 model 67 had 524,288 8-bit bytes of memory. It
could perform an add in 1.5 microseconds, a multiply in 6.
... snip ...

timothy.sipples@US.IBM.COM (Timothy Sipples) writes:
Agreed. There are a lot of similarities, but one difference is the ubiquity
of the Internet. It's really an accident of history (telco monopolies) that
the price-per-carried bit collapsed *after* the prices of CPU and storage
did. So we went through (suffered?) an intermediate phase when computing
architectures were principally constrained by high priced long distance
networking (the "PC revolution" and then "Client/Server"). It's interesting
viewing those phases through the rear view mirror. In many ways it's back
to the future now.

as mentioned upthread about telcos having very high fixed
costs/expenses ... the significant increase in available bandwidth from
all the dark fiber in the ground represented a difficult chicken/egg
obstacle (disruptive technology). The bandwidth hungry applications
wouldn't appear w/o a significant drop in use charges (and could still
take a decade or more) ... and until the bandwidth hungry applications
appeared, any significant drop in the usage charges would mean that the
telcos would operate deeply in the red during the transition.

The communication group then did a corporate study that claimed there
wouldn't be customer use of T1 until the mid-90s (aka since they didn't
have a product that supported T1, the study supported customers not
needing T1 for another decade).

The problem was that the 37x5 boxes didn't have T1 support ... and so
what the communication group studied was "fat pipes" ... support for
being able to operate multiple 56kbit links as a single unit. For their
T1 conclusions they plotted the number of "fat pipes" with 2, 3, 4, ...,
etc 56kbit links. They found that the number of "fat pipes" dropped off
significantly at four or five 56kbit links, and there were none above
six.

There is always the phrase about statistics lying ... well, what the
communication group didn't appear to realize was that most telcos had a
tariff cross-over at about five or six 56kbit links costing about the
same as a single T1 link. What they were seeing was that when customer
requirements reached five 56kbit links ... the customers were moving to
a single T1 link supported by other vendors' products (which was the
reason for no "fat pipes" above six).

The communication group's products were very oriented towards the legacy
dumb terminal paradigm ... and not the emerging peer-to-peer networking
operation. In any case, a very quick, trivial survey by HSDT turned up
200 customers with T1 links (as a counter to the communication group
study claiming that customers wouldn't be using T1s until the mid-90s
... because they couldn't find any "fat pipes" with more than six 56kbit
links).

this is analogous to the communication group defining T1 as "very high
speed" in the same period (in part because their products didn't support
T1) ... mentioned in this post:
http://www.garlic.com/~lynn/2010e.html#11 Crazed idea: SDSF for z/Linux

the various internal politics all contributed to not letting us bid on
the NSFNET backbone RFP ... even when the director of NSF wrote a letter
to the corporation ... and there were observations that what we already
had running was at least five years ahead of the RFP bid responses (to
build something new). misc. old NSFNET related email from the period
http://www.garlic.com/~lynn/lhwemail.html#nsfnet

gahenke@GMAIL.COM (George Henke) writes:
Actually a previous client, a large Wall St investment house that survived
the recent crisis, has sooooooooo many blade servers in its data center they
can't fit anymore in. So they have a pilot project to bring them up on
LINUX under z/VM.

the claim was that in the 90s ... having multiple applications co-exist
in the same operating system required scarce, high-level skills
(allocation, co-existence, capacity planning, etc) ... it was much
easier and cheaper to throw hardware at the problem ... giving each
application (and even each application instance) its own dedicated
hardware.

rolling forward to a couple of years ago, organizations found themselves
with thousands, tens of thousands, or even hundreds of thousands of
these dedicated servers ... all running at 5-10% utilization.

virtualization, dynamic load balancing and some other technologies came
together to support server consolidation (sometimes 10 or 20 to one,
running on essentially identical hardware). part of the issue was that
only a very modest incremental skill level was required for server
consolidation (as compared to trying to get lots of disparate
applications to co-exist in the same system).
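
the consolidation arithmetic is simple enough (made-up numbers):

  #include <stdio.h>

  int main(void)
  {
      int    dedicated = 12;     /* dedicated servers */
      double util      = 0.07;   /* each running at ~7% */

      /* aggregate load expressed in "whole server" units */
      printf("%d servers at %.0f%% ~= one comparable host at %.0f%%\n",
             dedicated, util * 100.0, dedicated * util * 100.0);
      return 0;
  }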

lots of technologies are being pumped into the virtualization
environment ... like dynamic migration of virtual machines to different
hardware (even in different datacenters) for capacity reasons and/or
continuous operation reasons.

I can vaguely remember WYLBUR at a service bureau in the 1970's, we
were told it had less machine cost than TSO, it was the first time I
had my own terminal but I don't remember WYLBUR being used in very
imaginative ways, mostly it just avoided the unit record equipment but
even end users still had to know a little about JCL.

and "technically" how could there be reference to wylbur when wylbur
didn't run applications. the referenced terminal input/output is
assembler subroutine (for PLI adventure) ... that uses TPUT/TGET macros.
CMS had simulation for TSO TPUT/TGET (and the specific source actually
comes from a cms adventure pli distribution) ... so i'm guessing that
orvyl may have also had TSO TPUT/TGET macro support.

There are also web references to CERN getting WYLBUR/ORVYL from SLAC
... and that WYLBUR had been done at SLAC (not sure whether they didn't
differentiate between SLAC and stanford ... or just didn't know). SLAC
and CERN were somewhat "sister" operations ... sharing a lot of
dataprocessing applications. The CERN documents also have CERN porting
the WYLBUR editor to non-IBM platforms (or at least re-implementing an
editor with the same syntax).

ps2os2@YAHOO.COM (Ed Gould) writes:
In the mid 70's we had a T1 and we muxed it and IIRC we had 1 256K
chunk and another chunk (sorry do not remember the speed) connected up
to our 3745 and it worked really well (except a really strange bug
which took us with the help of chance to figure out what the issue
was). We were exercising it and kept it busy at least 20 out of 24
hours a day. I vaguely remember talking about the bug with IBM at the
time (we were a small minority user of something like this at the time
as IBM apparently only had a few people that seemed to know this part
of NCP). Its not too surprising I guess that IBM really did not
support a full T1 but if my memory (its iffy here) is correct it had
something to do with the speed of the 3745 as to why IBM couldn't
support it. SInce memory fades with time and I only remember small
pieces we did seem to be on the bleeding edge at that time.

in the mid-80s, La Gaude had an experimental 3725 that was dedicated to
running a single T1.

the corporation did have a 2701 in the 60s that supported T1 ... but the
communication group in the 70s acquired an increasingly narrow, myopic
focus on dumb terminals; they also leveraged corporate politics to keep
other business units out of areas that they thought even remotely
touched on what they believed was their responsibility.

this shows up, at least, in the constant battles my wife had with the
communication group when she was in POK responsible for loosely-coupled
architecture ... with only temporary truces allowing her to use her own
protocol for machine-to-machine communication within datacenter
walls. some past posts mentioning her peer-coupled shared data
architecture ... which, except for IMS hot-standby, saw little uptake
until sysplex.
http://www.garlic.com/~lynn/submain.html#shareddata

The (20yr old) 2701s were becoming increasingly long in the tooth during
this period. in the mid-80s, the federal systems division did come up
with the zirpel card for the S/1 that supported T1 (for special
gov. bids).

however, for the most part, if other business units couldn't be kept out
purely using the argument that only the communication group could
produce communication-related products ... then there were always
studies, like the "fat pipe" study, that would be presented to corporate
hdqtrs ... showing that customers didn't even want such products.

it was also the motivation for a senior engineer from the disk division
getting a presentation slipped into the internal world-wide
communication group annual conference ... where the opening statement
was that the communication group was going to be responsible for the
demise of the disk division.

I got HSDT involved with a babybell that had done NCP emulation on
series/1 ... and I was deep into trying to put it out as a corporate
product (and really got involved in interesting politics ... this is a
scenario where truth really is stranger than fiction). In any case, I
gave a presentation on the work at the fall '86 SNA architecture review
board meeting in Raleigh ... the plan being to quickly put out a
series/1-based version while porting to the RIOS chip (aka rs/6000).

the 3725 pieces of the numbers came from the official corporate HONE
configurator (which sales & marketing used for selling to customers)
... part of the presentation to the fall '86 SNA architecture review
board meeting in Raleigh
http://www.garlic.com/~lynn/99.html#67 System/1 ?