One of the reasons we were always happy to pay to get a Wheeler
scheduler, beginning way back in the PRPQ days, was that it did
such a good job of protecting other users from a CPU hog.

Indeed, several times a year we would have a user panic because
he had just discovered that his computer account was overdrawn
by several thousand dollars. The scenario was always the same.
He had invoked a program or EXEC he was working on; his terminal
had gone dead, so he had gone home for the night. A couple of
days later, he tried to logon again, found himself still logged
on, and asked the operators to force him. That's when he found
he had no money left. Then he would come to us. We'd tell him
about loops, ask him not to do that again, and give him his
money back.

The interesting part of all this is that the Wheeler Scheduler
had been doing such a good job of protecting the system from
the looping user that nobody had noticed him. The scheduler
just kept him in the background absorbing the spare cycles, but
didn't let him use the cycles somebody else wanted.
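
As a minimal illustration of the general decay-usage / fair-share idea (this is only a sketch, not the actual Wheeler Scheduler algorithm; the DECAY constant, the User class, and the selection rule are all made up for the example):

    # illustrative decay-usage dispatcher sketch -- NOT the real Wheeler
    # Scheduler; constants and structure are assumptions for illustration
    DECAY = 0.5   # assumed: remembered CPU use decays by half each interval

    class User:
        def __init__(self, name):
            self.name = name
            self.recent_cpu = 0.0   # decayed record of recent CPU consumption

    def end_of_interval(users, cpu_used_this_interval):
        # charge each user for CPU consumed this interval, then age the history
        for u in users:
            used = cpu_used_this_interval.get(u.name, 0.0)
            u.recent_cpu = (u.recent_cpu + used) * DECAY

    def pick_next(ready_users):
        # dispatch the ready user with the least remembered CPU consumption;
        # a looping user accumulates a large recent_cpu value and so only gets
        # dispatched when nobody "better" is ready, i.e. it absorbs spare cycles
        return min(ready_users, key=lambda u: u.recent_cpu)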

This is not at all the way the HPO 3.4 scheduler works, however.
In the year we've been running it, we have seen numerous cases in
which one or two heavy CPU users severely degraded the performance
of the entire system.

These people are not paging heavily and are not doing a lot of
I/O. (VM has never done a real good job of containing users who
put excessive loads on memory/paging or I/O.) They are using
CPU only and generally have very small working sets. Typically,
their TVRATIOs are 1.0.

And the HPO 3.4 scheduler lets a single such user have as much as
90% of one processor in the middle of the afternoon, when there
are plenty of other users who need (and deserve) some of those
cycles.

I'm rather at a loss to figure out how to approach IBM on this
problem. I don't want to be told that the scheduler is working
as designed. Does anybody have any suggestions? Also, do other
people see this problem?

after leaving cambridge for san jose research (later part os 70s) ... I
did a lot of work on heavy i/o users ... "scheduling to the bottleneck"
... but none of that work was shipped in a product. some recent posts
mentioning systems becoming more & more i/o constrained (as improvements
in disk performance lagged other system components)
http://www.garlic.com/~lynn/2011.html#35 CKD DASD
http://www.garlic.com/~lynn/2011.html#61 Speed of Old Hard Disks

Hi, Lynn, Sorry not to have gotten back to you sooner -- we were out
of town when your mail came.

The Cyber 205 is not yet at the John Von Neumann Center here, because
the Consortium's building is not yet finished. It is running
Consortium work, but is still at CDC in Arden Hills, Minnesota. It is
scheduled to be moved to Princeton in June.

In the meantime, two VAX 8600's that will be its frontends are sitting
in the PUCC machine room, and the supercomputer staff is working on
getting the software for them up. It is hoped that by May there will
be a configuration in place that will allow people to start testing
out their communications.

Right now, the 205 and a pair of 8600's are at Arden Hills. The
8600's in the PUCC machine room aren't really talking to anything
else. The VAX750 in the PUCC machine room has a (not yet stable)
Ethernet link to the PUCC 3081, but is not communicating with the
8600's yet.

In May, it is hoped that there will be a configuration like the one
above except that the 205 will still be in Arden Hills and there will
be a 56kb line from the PUCC VAX750 to the 8600's in Arden Hills. The
central Ethernet in the diagram above will be in place. Thus, all the
Consortium members should be able to get to the 205 via the PUCC VAX.
All of the communications will be TCP/IP.

As far as we know, there are no plans for the 205 to become a BITNET
node. There is a physical Ethernet link, but no BITNET-to-Ethernet
crossover. One can now logon to a 3081 userid and transfer files
across the Ethernet to the VAX750. If it would be of any use to you,
you are most welcome to an account on the 3081.

The Computer Center is not directly involved in the supercomputer
project. We are working to get all the machines that use our local
area network communicating with the 750, so that all of our users will
be able to access the 205 someday, but that's the extent of our
involvement.

Thus, what I've been telling you was learned at second hand. We would
strongly recommend that you get in touch with Ira Fuchs, our Vice
President for Computing. He has been lent to the Consortium half time
to try to get their communications going and will have much more
precise and current information than I have. Ira is also a co-author
of the "Computer Networking for Scientists" article in Science (28
February 1986, pp. 943-950) that described NSFnet. Ira's BITNET
address is FUCHS@PUCC.

Thank you very much for your recent notes about hungusers. I am still
struggling with the problem.

from long ago and far away ... TYMSHARE is being bought by M/D. I got
brought in to evaluate TYMSHARE's GNOSIS as part of its spin-off as
KEYKOS. Also got some IBM interviews for Doug Engelbart (who was at
TYMSHARE at the time).

Lynn, as you may have heard, we are about to move VMSHARE and
PCSHARE from TYMSHARE to McGill University. So that you and
the McGill people won't have Customs hassles, it's been decided
that I will make your monthly tapes of the conferences from the
copy of the databases that I keep on my system. So, I need a
mailing address for you.

Do you need/want the index files? There is a public domain
program (written by Arty Ecock, of CUNY) that can be used to
search the indices without requiring the conferencing system.

original cp67 kernel (brought out to univ. jan68) was more than a box
... but fit in a tray.

at that time (jan68), cp67 group didn't quite trust the cms filesystem.
distribution was cp67 source (& assembled source "txt" decks) on OS/360
tape ... and assembled on os/360. punch the individual module "txt"
decks, use a magic marker to draw a diagonal stripe across the top of each
deck and write the module name. individual txt decks were arranged in the
tray in the appropriate order and a "BPS" loader was placed at the front of
the whole thing.

load the whole deck in 2540 card reader, dial in the card reader on the
front console and hit the "IPL" button. BPS loader would read all the
cards into memory and transfer to the appropriate place ... "LDT" card
pointing to savecp. savecp would find the correct disk location and
write the memory image to disk with appropriate IPL records. Then cp67
system could be booted/IPLed from disk.

Source changes could be made to an individual module's assemble file, that
module re-assembled and the assemble output TXT deck punched. Repeat
the magic marker operation (diagonal stripe and module name across the
top of the deck) ... find the corresponding old deck in the card
tray (from the information written across the top of each deck) and replace
the cards.

The other approach was to take all the TXT decks and write the BPS loader
and the TXT deck images to tape ... IPL the tape (instead of the 2540 card
reader) to create a new cp67 kernel.

A little later in the year, the cp67 group grew confident enough in the
CMS filesystem ... that cp67 source was moved to CMS ... and started
using the CMS UPDATE command for source updates ... instead of directly
editing the base assemble source, edit an "UPDATE" file ... and use the
UPDATE command to change the assemble source creating a
temporary/working assemble file, which was then assembled.

An update deck would have "./" control statements that replaced,
inserted, deleted records in the original. It used sequence numbers in
cols. 73-80 of the original source. The replace/inserted new source
(from the update deck) had to be completely generated manually (including the
"new" sequence numbers in cols. 73-80). I was making so many source code
changes to CP67 that I wrote a preprocessor to the update command
... which added a "$" to the end of the "./" control statements ...
that defined the automatic generation of sequence numbers in the
new/replaced source files. The preprocessor would read the new "update"
file ... process any "$" fields appropriately generating the sequence
numbers ... output to a temporary "update" file which was then fed to
the UPDATE command ... which generated a temporary "assemble" file that
was fed to the assembler.
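
A rough sketch of the idea (a hypothetical simplification, not the actual preprocessor; the real "$" syntax and the CMS UPDATE conventions differed, and the start/increment values here are made up):

    # hypothetical sketch of a "$" update preprocessor (not the real tool).
    # control cards ("./ ...") are copied through with a trailing "$" marker
    # removed; new source cards get generated sequence numbers in cols 73-80.

    def preprocess(update_cards, start=1000, incr=1000):
        out, seq = [], start
        for card in update_cards:
            if card.startswith('./'):
                out.append(card.rstrip().rstrip('$').rstrip())
            else:
                body = card.rstrip('\n').ljust(72)[:72]
                out.append(body + str(seq).rjust(8, '0'))   # cols 73-80
                seq += incr
        return out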

Later, after joining the science center, a multi-level source update
procedure was created. Update files now had a "prefix" filetype of
"UPDG" (with remaining four characters in filetype used for specifying
an update level). The "$" preprocessor was invoked which generated a
"UPDT" temporary file (with the same suffix) ... which was then applied
to assemble source to generate a temporary assemble file. This process
then iterated (for each update file for that particular source routine)
... but with UPDATE applying the source update to the most recently
generated temporary assembler file (output then replaced the temporary
assembler file).
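
A sketch of the multi-level iteration (apply_update here is a toy stand-in for the CMS UPDATE command, handling only simplified "./ I" and "./ D" cards; file contents are assumed to be lists of 80-column card images):

    # toy multi-level update application -- illustration only, not the real
    # CMS UPDATE command; sequence numbers assumed in cols 73-80.

    def seq(card):
        return int(card[72:80].strip() or 0)

    def apply_update(source, update):
        out = list(source)
        i = 0
        while i < len(update):
            parts = update[i].split()
            i += 1
            if parts[:2] == ['./', 'D']:              # delete a seq range
                lo = int(parts[2])
                hi = int(parts[3]) if len(parts) > 3 else lo
                out = [c for c in out if not (lo <= seq(c) <= hi)]
            elif parts[:2] == ['./', 'I']:            # insert after a seq
                after = int(parts[2])
                new = []
                while i < len(update) and not update[i].startswith('./'):
                    new.append(update[i])
                    i += 1
                idx = [j for j, c in enumerate(out) if seq(c) <= after]
                pos = (idx[-1] + 1) if idx else 0
                out[pos:pos] = new
        return out

    def build_working_assemble(base_source, updt_files):
        # each update level is applied to the output of the previous level,
        # ending with the temporary assemble file fed to the assembler
        work = base_source
        for updt in updt_files:
            work = apply_update(work, updt)
        return work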

Eventually the CMS editor was enhanced to have an "update" mode; rather
than directly creating the UPDG file ... it was possible to edit a
source file ... and the CMS editor would "save" all source changes made
as an update file.

Lynn, I was truly touched by your having spent part of your Saturday
morning loading up those CP-67 EXECs for me. It was extraordinarily
thoughtful of you and has helped me answer almost all of my questions
about the CP-67 implementation.

I have been working my way through the EXECs and believe that I have
them all deciphered now. I was somewhat surprised to see how much of
the function was already in place by the Summer of 1970. In
particular, I hadn't expected to find that the update logs were being
put at the beginning of the textfiles. That has always seemed to me
to be one of the most ingenious aspects of the entire scheme, so I
wouldn't have been surprised if it hadn't been thought of right away.
One thing I can't determine from reading the EXECs is whether the
loader was including those update logs in the loadmaps. Do you
recall?

Of the function that we now associate with the CTL option of UPDATE,
the only substantial piece I see no sign of in those EXECs is the use
of auxfiles. Even in the UPAX EXEC from late January, 1971, it is
clear that all of the updates listed in the control files were
expected to be simple updates, rather than auxfiles. I know, however,
that auxfiles were fully implemented by VM/370 Release 1. I have a
First Edition of the "VM/370 Command Language User's Guide" (November,
1972) that describes them. The control file syntax at that point was

the standard cp67 build process involved generating a new "kernel" card
deck to tape. I enhanced the process to also add additional files on the
tape (behind the bootable card deck image) ... everything that was
needed to generate that card deck image ... all the source, all the
source changes ... all the processes and procedures.

I periodically archived some of these tapes ... and besides straight
backup/archive tapes ... I took several of these tapes when I
transferred to SJR on the west coast. Over the years, I would copy the
tapes to newer technology (800bpi to 6250bpi, etc). It was from these
tapes that I retrieved a lot of stuff for Melinda ... and it was these
tapes (even some replicated) that were "lost" in the Almaden tape library
operations glitch.

--
virtualization experience starting Jan1968, online at home since Mar1970

Lynn, I am working with XXXXXX to respond to Princeton/Wisconsin/
Delaware request for equipment with which to experiment with a micro
backbone for BITNET for access to supercomputers. It sounds like what
you mentioned to YYYYYY recently that you are proposing to NSF.

What is your proposal? Who are you working with in ACIS HQ? Who are
you working with in NSF? NSF is granting 500K to the above schools for
1 year to do the work. ACIS is trying to respond ASAP to Princeton's
request for equipment. Please give me a call and let's discuss what
you are doing. I have a lot more confidence in your plan than in what
these 3 schools are trying to accomplish over the next year. They are
expecting the money momentarily but have no firm plan of who is doing
what to whom.

It's good to talk with you again. Looking forward to talking with you
on phone.

re: foils; fyi; hsdt009 is what has been given to NSF to tie together
all the super computer centers. Have given it to UofC system to tie
together all their campuses. Will be giving it to national center for
atmospheric research (NCAR) in boulder on monday.

re: hsdt; thot you may be interested in some of the stuff. will
probably be over in europe last week in june thru most of july. Have
pitched hsdt to NSF for tieing together all the super computer centers
and have gotten very favorable response (bandwidth is about 20* the
alternatives for similar costs). Have &/will be pitching to several
research/university systems. Have pitched to the Univ. of Cal.
systems, will be pitching it to national center for atmospheric
research before heading to toronto to give pam pitch at the interim
share.

Plan on pitching to the european univ. network when I'm over and
several other organizations.

re: rmn; looks like very good possibility of building a 4 processor
proto-type using A74s (about 350kips) next year ... romans probably
aren't going to be available until late 87 unless we build a fire
under them and accelerate the schedule. Have pitched it to both scd
and spd executives.

re: network; i would like to place hsdt009 script on the network disk
... it is foils which describe the high-speed data transport adtech
project. Unfortunately, it can't be labeled ibm internal use only
since the foils are part of information that is being presented
outside of ibm ... including (among others) NSF as means for tieing
together all the super computer centers, Univ. of Cal. for tieing
together all campuses, NCAR for connecting 20+ universities who are
using the national center for atmospheric research, etc.

In fact, NSF has expressed interest in actively participating in the
adtech project. There is also some number of other documentation files
which (among other things) contains a detailed analysis of several
RSCS performance bottlenecks and proposed solutions (although some of
the information has already been extracted in forums on VMPCD and
IBMVM).

folklore is that when the executive committee was told about computer
conferencing (and the internal network), 5of6 wanted to fire me.
There were then a lot of corporate investigations and taskforces
looking at the phenomenon ... one of the results was "official" support
for (VM-based) computer conferencing. The first such conference was
"IBMVM" ... but eventually others were spawned by different
organizations on number of subjects. VMPCD was a VM performance
subject sponsored by Endicott. There were also networking online
conferences (NETWORK, NETWRKNG, etc) sponsored by the communication
group.

Some had a requirement that topics were classified "Internal Use Only"
(or sometimes higher) ... however, couldn't really classify
presentations to external organizations as "Internal Use Only".

from above:
IBM tried to react by launching a major project called the 'Future
System' (FS) in the early 1970's. The idea was to get so far ahead that
the competition would never be able to keep up, and to have such a high
level of integration that it would be impossible for competitors to
follow a compatible niche strategy. However, the project failed because
the objectives were too ambitious for the available technology. Many of
the ideas that were developed were nevertheless adapted for later
generations. Once IBM had acknowledged this failure, it launched its
'box strategy', which called for competitiveness with all the different
types of compatible sub-systems. But this proved to be difficult because
of IBM's cost structure and its R&D spending, and the strategy only
resulted in a partial narrowing of the price gap between IBM and its
rivals.

... snip ...

Other references are that during the FS period ... all sorts of internal
efforts (viewed as possibly competitive) were killed off ... including
370 hardware&software products (since FS was going to completely replace
360/370) ... which allowed (370) clone processor vendors to gain market
foothold. Then, after the FS demise there was mad rush to replenish the
370 software&hardware product pipeline. misc. past posts mentioning
FS
http://www.garlic.com/~lynn/submain.html#futuresys

There have been a number of articles saying that the corporation lived under
the dark shadow of the FS failure for decades (deeply affecting its
internal culture).

The corporation had a large number of different microprocessors
... developed for controllers, engines used in low-end & mid-range 370s,
various other machines (series/1, 8100, system/7, etc). In the late 70s
there was an effort to converge all of these microprocessors to 801. In
the early 80s, several of these efforts floundered and some number of
the engineers left and showed up on risc efforts at other vendors.

There is folklore that after the FS demise, some number of participants
retreated to Rochester and did the S/38 with some number of FS features.
Then the S/38 follow-on (AS/400) was one of the efforts that was to have
one of these 801 micro-engines. That effort floundered (also) and there
was a quick effort to do a CISC engine. Then a decade later, AS/400
finally did migrate to 801 (power/pc).

There was a presentation by the i432 group at the annual Asilomar SIGOPS
... which claimed that a major problem with i432 was that it was a) complex
and b) implemented in silicon; all "fixes" required brand new silicon.

I had done a multiprocessor machine design in the mid-70s (never
announced or shipped) that was basically 370 with some advanced features
somewhat akin to some of the things in i432 ... but it was a heavily
microcoded engine ... and fixes were a new microcode floppy disk.

--
virtualization experience starting Jan1968, online at home since Mar1970

My first exposure to a PDP-10 was in 1977. Our university had an
arrangement with the UT Medical Center to get time on the MCRC PDP-10
(MCRC = Medical Computing Resource Center). The recommended max number
of users on the system was 64, yet regularly there were *80* users on
the system all day long. And response time was *dreadful*!!!

i got sucked into an academic dispute about local versus global LRU
replacement algorithms ... i had done global LRU for cp67 as an
undergraduate in the 60s ... about the time there was a lot of local lru
going on in academia.

more than a decade later somebody at stanford was doing a phd on global LRU
... and there was lots of resistance from some factions of the academic
community to awarding the phd. at asilomar sigops (14-16dec81) ... jim
gray asked me if i could provide some input (phd candidate was co-worker
at tandem).

i was having my own problems ... having been blamed for online computer
conferencing on the internal network in the late 70s & early 80s ... i
was under all sort of restrictions ... took me almost a year to get
approval to respond to jim's request ... even tho it primarily involved
work i had done as an undergraduate in the 60s. the response that i was
allowed to send ... nearly a year after the original request
http://www.garlic.com/~lynn/2006w.html#email821019
in this post
http://www.garlic.com/~lynn/2006w.html#46 The Future of CPUs: What's After Multi-Core?

part of the response involved pointing to some work done on cp67 at the
grenoble science center in the early 70s ... for a local LRU ... and
published in cacm. at the time, grenoble had 1mbyte real storage (155
pageable pages after fixed storage requirements) 360/67 running cp67
with 35 users and subsecond trivial response. Cambridge science center
had very similar cp67 with very similar cms workload on 768kbyte (104
pageable pages after fixed storage requirements) with my global LRU
implementation ... and had similar subsecond response running 70-80
users (grenoble had 50 percent more pageable pages and half the users).
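
For reference, a minimal sketch of the general idea of a global clock-style LRU approximation (illustration only, not the actual cp67 replacement code; the Frame class and select_victim function are made up for the sketch):

    # minimal global clock / LRU-approximation sketch -- all pageable frames,
    # regardless of which user owns them, sit in one circular list; the hand
    # clears reference bits and steals the first unreferenced frame it finds.

    class Frame:
        def __init__(self, owner, page):
            self.owner, self.page = owner, page
            self.referenced = False   # set by the hardware when page is touched

    def select_victim(frames, hand):
        while True:
            f = frames[hand]
            if f.referenced:
                f.referenced = False                   # second chance
                hand = (hand + 1) % len(frames)
            else:
                return f, (hand + 1) % len(frames)     # steal this frame

A "local" policy would run the same scan, but only over the frames belonging to the faulting user.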

note that IBM nearly bet the company again a decade later with "Future
System" (comments that if any other vendor had a failure the magnitude
of FS, it would no longer be in business). misc. past posts mentioning
FS
http://www.garlic.com/~lynn/submain.html#futuresys

I may have established the tone for the rest of my career at the company
by ridiculing the FS effort ... drawing comparisons with a cult movie that
had been playing down at central sq. (and claiming some stuff I already
had running was better than what they were blue skying in various
vaporware documents).

The first personal computer (PC)

Joe Thompson <spam+@orion-com.com> writes:
I don't know if I'd take that at face value. Phone-spamming numbers on
the DNC list sounds like a guaranteed way to just piss off a lot of
people and get no sales out of it. He'd do better just calling every
number in the phone book, and he probably knows that. -- Joe

there was earlier discussion where I claimed that I started getting an
extremely large uptick in political soliciting calls after registering
for DNC (high correlation with using do-not-call as a calling list,
which they had exempted in the legislation) ... past posts here in
a.f.c.
http://www.garlic.com/~lynn/2008m.html#73 Blinkylights
http://www.garlic.com/~lynn/2009b.html#47 How to defeat new telemarketing tactic

--
virtualization experience starting Jan1968, online at home since Mar1970

The first personal computer (PC)

tony cooper <tony_cooper213@earthlink.net> writes:
I have no source or data to back this up, but I think that most US
owners of mobile phones have a plan that includes a certain number of
minutes (incoming and/or outgoing calls). The per-minute charge for
minutes *over* the plan is usually exorbitant, but plans can be
adjusted.

we have a plan with a fixed number of "call" minutes ... but a separate
charge on non-voice (text) messages. we started getting so many spam text
messages ... which show up on the bill ... finally had to ask the service
provider to block incoming text messages. they claimed the only
option/feature they supported was to turn off all incoming and outgoing
(which we finally had to do).

we still get some SPAM voice calls ... at least some seem to originate
from call centers outside the US ... which may be beyond the do-not-spam
legislation. they may also be leveraging VOIP to minimize their costs.

--
virtualization experience starting Jan1968, online at home since Mar1970

Future System descriptions talk about it having a one-level-store design
... but don't mention that large sections of the design/architecture
were vaporware ... a lot of description that managed to have very
little actual content.

I had watched the effort over the years trying to get TSS/360 up (with
its one-level-store) and running on the univ. 360/67 (tss/360 was
the "official" virtual memory operating system for the 360/67) ... and
after getting cp67/cms being able to run massive rings around tss/360
(on effectively identical benchmarks, especially after rewriting a
lot of the cp/67 code).

While TSS/360 had done a much better job for address constants in
application execution images (compared to os/360 ... which was used
heavily by cms) ... there were still large portions of the tss/360
one-level-store that were poorly implemented ... especially from a
thruput standpoint ... somewhat analogous to recent global vis-a-vis
local LRU ... recent discussion/post:
http://www.garlic.com/~lynn/2011c.html#8 The first personal computer (PC)

In any case, the Future System one-level-store appeared to have been
heavily influenced by the TSS/360 effort ... *AND* to have learned
nothing from that effort. After joining the science center and doing
a paged mapped filesystem for cp67/cms ... I tried to avoid a whole
bunch of things that I saw done wrong in TSS/360. misc. past posts
mentioning paged mapped filesystem
http://www.garlic.com/~lynn/submain.html#mmap

... some of it in conjunction with what was to become NSFNET backbone
... as well as processor cluster stuff (I had to get a substitute to
present to the director of NSF on the HSDT stuff ... because I got
preempted for a processor cluster meeting in YKT).

However, there was lots of internal political pressure that prevented
us from bidding on the NSFNET backbone (overcoming even lobbying by the
director of NSF ... and statements that what HSDT already had running
was at least five yrs ahead of all bid submissions for the NSFNET
backbone). Past post
http://www.garlic.com/~lynn/2006w.html#21 SNA/VTAM for NSFNET

The first personal computer (PC)

greenaum@yahoo.co.uk (greenaum) writes:
I can see the point of that for running massive number-crunching,
finite element analysis, or whatever it's called now. Surprised it's
taken so long, you'd think labs would have been sharing since the
Internet got wide enough 10 or more years ago. Perhaps it's pride.
They don't want anyone using their lovely shiny supercomputer, and
using some smelly computer miles away would be admitting inferiority.

Google, a few years ago, started selling teraflops (exaflops?) in
shipping containers. Unload the container in the yard, plug the
3-phase in, and you've got a supercomputer for as long as you need it.

This might help out the environmental problem that computing's now
causing. Everyone do the same, and ship their containers near a
hydroelectric dam somewhere. Perhaps ship each one with a windmill
generator. Caretaking one would be nice. Nice scenery, interesting
job, something for the cyber-hermits of the future, or a nice
retirement job.

Quadibloc <jsavard@ecn.ab.ca> writes:
Of course, they ended up rolling a lot of FS in the AS/400. It
depends, too, on the value of "nearly". Yes, they spent a lot of money
on that development project. But companies that never spend money on
research end up disappearing from the market.

it went into S/38 ... one of the issues was that the shortcuts and lack
of thruput that were critical at the high-end ... were much less of an
issue at the S/38 end of the market.

one of the s/38 simplifications was that all disks were treated as a
common pool with record allocation being scatter allocation across the
whole disk pool. this had downsides for both thruput and availability
(somewhat referred to in followup post regarding one-level-store from
tss/360 and apparently neither FS nor S/38 learned anything from their
experience; and work i did on page mapped filesystem).

and one of the engineers there got a patent in the 70s ... on what would
come to be referred to as RAID.

S/38 problem with common pool ... was the whole, complete infrastructure
had to be backed up as a single entity (all disks) ... and then if any
single disk failed ... a whole complete restore had to take place (in
some cases claimed to take a day or more).

In any case, that single disk failure vulnerability ... taking out the
whole infrastructure and requiring a whole infrastructure restore ... is
claimed to be the motivator for S/38 being an early RAID adopter (masking
a single disk failure ... since scatter allocation resulted in a single
failure taking out the whole infrastructure ... and doesn't scale at all).
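
A back-of-envelope illustration of why a single drive failure effectively takes out everything under scatter allocation (the disk count and record counts are made up for the example):

    # illustration: probability that a file scattered across N disks
    # completely avoids one failed disk (numbers are made up)
    N = 16            # assumed number of disks in the common pool
    for records in (1, 10, 100, 1000):
        p_survive = ((N - 1) / N) ** records
        print(f"{records:5d} records: {p_survive:.4f} chance of avoiding the failed disk")
    # with a few hundred scattered records per file, essentially every file
    # touches the failed disk, so a single failure forces a full restore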

In a comp.arch risc/cisc reference ... there is discussion that as/400 is
the s/38 follow-on and as/400 was originally going to be one of the
801/risc implementations (corporate effort from the late 70s to converge
the large number of different microprocessors to a common 801 platform)
... effectively they all floundered and as/400 had a crash program to do
a cisc chip (although as/400 eventually did move to 801 power/pc a decade
later).

with trivia that my brother was regional marketing rep for apple
(largest physical area in conus) ... and worked out being able to dial
into apple corporate hdqtrs to check on build&ship schedules ... which
turned out to be a s/38.

I am running an ad hoc group here in Rochester trying to incorporate
a network/broadband technology into our 1Q88 processor (a new product
line which will merge the S/36 and S/38 products). We are involved in
an effort (from Yorktown Research). Our intent is to use
network/broadbands to move our licensed products from the PIDs of the
world to the customer. Likewise, I would like to offer
network/broadband distribution as a functional offering to our
customer set for data networking capability. I am trying to track down
every person I can find who is doing network/broadband work so that I
don't reinvent any wheels and also so that I don't blindly head down
the wrong path. xxxxxx in Corporate Internal Telecommunications told
me that you were involved in network/broadband work in some
fashion. Can you enlighten me and perhaps give me more to go on?

French PTT currently has 5 25-watt transponders that it is planning
on using for data in a TDMA system (on TELECOM1). They plan on
putting up an additional satellite that is supposedly all data.
La Gaude lab. is supposedly doing a beta test of the service to
the location in Paris. French PTT plans on providing to the customer
over-night changes in bandwidth allocation and/or possibly dial-up
(overnight reserved) connections (although the TDMA time-plan change
actually only takes a couple of minutes).

La Gaude lab. is using a 2 meg. 3725 with T1 adapter for doing the
beta test. They have almost completed the test except for some
interactive testing scheduled for late next month. Supposedly with
appropriate "tuning" of the NCP parameters they have gotten up to
95% efficiency when transmitting data in one direction at speeds varying
between 9.6kb and 1.5mb. They have also done some tests in two
directions and have tested an aggregate thru-put of the 3725 at
about 1.7mbit to 1.8mbit (i.e. total number of bits going in both
directions ... i.e. 3725 can't support full-duplex 1.5mbit).

Tests are on 3083, MVS/VTAM, and 3725s with identical configurations
at both La Gaude and Paris. Interactive tests and report should
be done some time in Sept.

That summer was an extended whirlwind tour of numerous places in Europe
(including the La Gaude lab). The 3725 was the biggest and fastest ... and
full-duplex T1 (1.5mbit) would be 1.5mbit concurrently in both
directions, aka 3mbit/sec aggregate (European "T1" is 2mbit/sec
full-duplex or 4mbit/sec aggregate; the 1.7mbit limit wouldn't even
handle a single direction of European "T1").
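
Spelling out the arithmetic (a sketch; the 1.8mbit figure is the measured aggregate from the La Gaude test above, the line rates are standard T1/E1 rates):

    # aggregate bandwidth arithmetic (sketch)
    us_t1  = 1.544    # mbit/sec in each direction
    eur_t1 = 2.048    # mbit/sec in each direction (European "T1", i.e. E1)
    print("US T1 full-duplex aggregate:      ", round(2 * us_t1, 2), "mbit/sec")
    print("European T1 full-duplex aggregate:", round(2 * eur_t1, 2), "mbit/sec")
    measured = 1.8    # mbit/sec aggregate measured on the 3725 in the beta test
    print("3725 measured aggregate:          ", measured, "mbit/sec")
    # 1.8 is below even ONE direction of a European T1 (2.048), let alone the
    # 4mbit/sec full-duplex aggregate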

Possibly because of the limitation (and in support of various SNA/VTAM
misinformation), the communication group did a report for corporate
executives that customers wouldn't be needing T1 before sometime well
into the 90s. 37xx controllers supported something called a fat pipe
which simulated the operation of a single link with a group of parallel
56kbit links. They did a survey of fat pipe use and found some number of
customers with two-link fat pipes, with the number of customers declining
as the number of parallel links increased, and nearly no customer use of
fat pipes with more than five 56kbit/sec links (280kbit/sec aggregate,
full-duplex would be 560kbit/sec).

What they overlooked (possibly purposefully) was that a) somewhere
around 5 or 6 56kbit links ... had the same aggregate cost/tariff as a
single T1 (1.5mbit) link and b) a trivial survey at the same time turned
up 200 mainframe customers with T1 links using non-IBM controller
products (the fat pipe survey was possibly self-justifying because IBM
didn't have products that could support full T1).
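
Rough arithmetic behind point (a); the dollar figures below are made-up placeholders, only the roughly 5-6x tariff ratio from the text matters:

    # hypothetical tariff comparison -- costs are placeholders for illustration
    cost_56k = 1000   # assumed monthly tariff for one 56kbit link
    cost_t1  = 5500   # assumed monthly tariff for one T1, about 5-6x a 56k link
    for links in range(1, 8):
        print(f"{links} x 56kbit = {56*links:4d} kbit/sec aggregate"
              f"  costs ${links*cost_56k:5d}   (T1 = 1544 kbit/sec for ${cost_t1})")
    # by 5-6 parallel 56kbit links the tariff matches a full T1 while giving
    # less than a quarter of the bandwidth, so surveying only fat pipe users
    # would naturally find almost nobody running more than about five links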

hancock4 writes:
Some (much?) of Future System found its way into the S/3x systems.
Would you know how much? That is, were they able to recover some of
the investment in FS by utilizing some of it in other products?

As a mainframer, one of things I was uncomfortable with about the AS/
400 was the higher level of independence from the physical machine.
That would be nice if the machine had very high memory and CPU
resources, but at that time the AS/400 was still a mini-computer with
physical limits.

one of the final nails in the FS coffin was some analysis by the
Houston Scientific Center

Eastern ran its System One, ACP airline res system on 370/195. Houston
Scientific Center did some FS analysis claiming that if ACP was run on
a FS machine built out of the fastest then available circuitry
(370/195), it would have the throughput of a 370/145 (a factor of 20-30
times slowdown). This had to do with the high level hardware
abstraction and multiple levels of indirection.

the above references that 3081 was built using FS hardware with 370
microcode ... and that there was an enormous amount of circuitry
(increasing manufacturing cost) and it was significantly slow for the
amount of circuitry (compared to clone processor competition).

one of the above has old email mentioning that ACP running on 3081D
(just using one of the processors since ACP didn't have multiprocessor
support), was 20% slower than running on 3033. Now the 3033 was claimed
to be a 4.5mip machine (50% faster than the 3mip 168-3) and each 3081D
processor was claimed to be 5mip. Later the 3081K was tested (essentially
double the cache size of the 3081D), claiming each processor was 7mip,
but ACP ran at approx. the same speed on one 3081K processor as on the
3033 (again, the 2nd 3081 processor was idle/unused).
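
A quick sanity check on those numbers (just arithmetic on the figures quoted above, no new data):

    # arithmetic on the MIP claims quoted above
    mips_3033 = 4.5
    # ACP on one 3081D processor ran 20% slower than on the 3033:
    effective_3081d = mips_3033 / 1.2     # about 3.75 "ACP mips", vs the claimed 5
    # ACP on one 3081K processor ran about the same as on the 3033:
    effective_3081k = mips_3033           # about 4.5 "ACP mips", vs the claimed 7
    print(round(effective_3081d, 2), effective_3081k)
    # i.e. for this workload each 3081 processor delivered well under its rated
    # mips, consistent with the comments above about circuitry vs. speed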

in any case, at the low/entry business computer level ... s/38 could get
away with the various throughput issues.

as mentioned in the posts and references ... the initial AS/400 was
designed to be a converged s/36 & s/38 (using a rapidly designed cisc
processor ...
after the 801/risc/iliad effort floundered). A decade later AS/400 was
migrated to 801/risc (power/pc).

--
virtualization experience starting Jan1968, online at home since Mar1970

If IBM Hadn't Bet the Company

Quadibloc <jsavard@ecn.ab.ca> writes:
I think the days are numbered, though... and like Unisys, IBM will
eventually have to throw in the towel, and use x86 for everything. At
least now that there's Nehalem-EX, they won't have to give up RAS to
do so. (I think Unisys went to Itanium instead of x86 for just that
reason - to get RAS - but I forget the details.)

Charles Richmond <frizzle@tx.rr.com> writes:
Sometimes the better part of valor is just to keep your mouth shut!!!
:-) I learned that the hard way, like I learn most things I
guess. Sometimes I thought I was the only one seeing a certain thing,
and I would point the thing out to everyone. Then I found out that
*everyone* saw it, but had the good sense to keep quiet about it.
:-(

There was another episode a couple years later ... I was in the process
of shipping my resource manager (much of it was stuff from cp67 that had
been dropped in the initial morph to vm370) ... discussed in some more
detail in this recent long-winded post (linkedin z/VM group)
http://www.garlic.com/~lynn/2011b.html#61 VM13025 ... zombie/hung users

there was one of the biggest, "true blue" commercial accounts not far
from boston ... which I would periodically drop by ... I knew several of the
local branch office people as well as people on the account. About the
time of the "resource manager" ... the branch manager had done/said
something that had horribly offended the customer. In response, the
customer was going to be the first "true blue" account to install a
large clone processor (there had been several installs at educational
accounts, but so far none at big commercial "true blue" accounts).

I was asked to go sit onsite at the customer account for six months
... appearing as if I was convincing the customer that IBM was better
than the clone competition. I was familiar with the situation and knew
that the customer was going to install the clone processor regardless of
anything I did (it would go into a huge datacenter and might even be
difficult to find in the wash of all the "blue processors").

I was told that I needed to do it for the CEO, the local branch
manager was his good friend (and crewed on the CEO's sailboat) ... and
being the first with a clone processor to blemish his record would
taint his career forever. My presence was needed to try and obfuscate
it being a technical issue ... and direct attention away from the
branch manager. I was finally told if I didn't do it, I wouldn't have
a career and could say goodbye to promotions and raises (wasn't a team
player to *NOT* take the bullet for the branch manager).

--
virtualization experience starting Jan1968, online at home since Mar1970

starting in the late 70s and extending through the first half of the
80s ... 43xx/mid-range was showing up with clusters (besting 3033 &
3081 in aggregate performance & price/performance) as well as leading
edge of distributed computing (large customers with 43xx orders in the
several hundred at a time). Also, 43xx (& vax) had dropped
price/performance in the mid-range market below some threshold and
they were selling in new markets (large number of one or two machine
orders). by the mid-80s both the 43xx & vax numbers were starting to
drop off as workstations & large PCs were starting to take over those
markets (cluster, distributed, and single new business). some old
43xx email
http://www.garlic.com/~lynn/lhwemail.html#43xx
old post with a decade of vax sales, sliced and diced in various ways:
http://www.garlic.com/~lynn/2002f.html#0 Computers in Science Fiction

It was in the mid-80s when the corporate executives started predicting
that company gross would be doubling from $60B to $120B ... and started
big expansion of mainframe manufacturing capacity (including the
enormous bldg. 50 on the san jose plant site to "double" disk
manufacturing). However, at the same time there was an enormous amount of
information that computing was becoming increasingly commoditized
(cluster, distributed, moving into the lower-end ... and moving to
workstations and large PCs) ... and the mainframe business was heading
in exactly the opposite direction from what the top executives had
predicted. It was relatively trivial to show with a spreadsheet that the
company was heading into the red (didn't seem to matter since I had
already been told that I didn't have a career).

oh, accelerating the downturn in the late 80s ... was the stranglehold
that sna/vtam & the communication group had on the datacenter. this shows
up in the late '80s with a senior disk engineer getting a talk scheduled
at the annual, internal, world-wide communication group conference ...
and opening the talk with the statement that the communication group was
going to be responsible for the demise of the disk division. the issue
was that the communication group's stranglehold on the datacenter was
isolating it from the emerging distributed computing environment.

users were getting fed-up with the limited bandwidth and capability
available for accessing data in the datacenter ... and as a result
there was lots of data fleeing the datacenter to more distributed
computing friendly platforms (resulting in a big downturn in datacenter
mainframe disk sales & revenue).

the disk division had developed some number of products to address all
the issues regarding working in a distributed environment ... but the
communication group was able to block nearly all efforts ... since the
communication group had corporate strategic responsibility for
everything that crossed the datacenter walls ... *AND* the
communication group was staunchly protecting its terminal emulation
install base.

IBM and the Computer Revolution

Quadibloc <jsavard@ecn.ab.ca> writes:
So it made significant contributions, even if, for all its size and
dominance, it only played a limited role as a driver of innovation.

another view ... is that the big market driver for use of computers was
the commercial dataprocessing business ... which the company already had
big position in with its tab equipment. some number of companies were
computer companies ... attempting to create brand new business market
(much more motivation for these "computer" companies to be on leading
edge). in contrast ... IBM could be viewed as applying newer
technologies to its existing business (it already had significant
revenue flow so there was much less motivation to create something brand
new).

--
virtualization experience starting Jan1968, online at home since Mar1970

IBM and the Computer Revolution

Anne & Lynn Wheeler <lynn@garlic.com> writes:
another view ... is that the big market driver for use of computers was
the commercial dataprocessing business ... which the company already had
big position in with its tab equipment. some number of companies were
computer companies ... attempting to create brand new business market
(much more motivation for these "computer" companies to be on leading
edge). in contrast ... IBM could be viewed as applying newer
technologies to its existing business (it already had significant
revenue flow so there was much less motivation to create something brand
new).

one could even claim that the adoption of computers for commercial (tab)
dataprocessing went a whole lot better than how the communication group
handled distributed computing moving in on its terminal emulation
install base
http://www.garlic.com/~lynn/2011c.html#21 If IBM Hadn't Bet the Company

--
virtualization experience starting Jan1968, online at home since Mar1970

A senior corporate executive had been the sponsor of the Kingston
supercomputing effort ... besides supposedly doing their own design,
there was also heavy funding for Steve's SSI. That executive retired at
the end of Oct91, which resulted in a review of a number of efforts,
including Kingston. After the Kingston review, there was an effort
launched looking around the company for something to be used as
a supercomputer and found cluster scaleup stuff (above referenced post
about Jan92 meeting in Ellison's conference room):
https://en.wikipedia.org/wiki/IBM_Scalable_POWERparallel

If IBM Hadn't Bet the Company

Joe Thompson <spam+@orion-com.com> writes:
Lower-end Unisys gear is mostly kit made by Dell to spec -- the older
stuff often has Dell splash screens and identifies itself as whatever
Dell model it really is internally. The higher-end (ES7000 and
ClearPath) are mostly Xeon, but Itanium is an option on a couple of
generations of each (not anything current though). -- Joe

370/195 had a peak of 10mips for codes that stayed in the pipeline
... however most branch instructions would "drain" the pipeline ...
most common code would only keep the pipeline half full and have thruput
of 5mips. This was the motivation for an internal effort for a
multi-threaded 370/195 ... basically it looked like a two-processor
370/195 multiprocessor ... but only had a single (shared) pipeline ...
the concept was that the two instruction streams would each keep the
pipeline half full ... for a totally full pipeline operating at peak/full
10mips. This never got announced or shipped. However, at least one of the
people from the YKT cp67 "G" effort went to work on it ... mentioned in
this email
http://www.garlic.com/~lynn/2011b.html#email800117
in this post
http://www.garlic.com/~lynn/2011b.html#72 IBM Future System
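
A toy throughput model of the dual i-stream idea (purely illustrative; the 50% "half full" figure comes from the text above, the rest is an assumption):

    # toy throughput model for the dual i-stream 370/195 idea (illustrative)
    peak_mips = 10.0
    fill_per_stream = 0.5     # "half full" pipeline due to branch drains
    one_stream  = peak_mips * fill_per_stream                    # ~5 mips
    two_streams = peak_mips * min(1.0, 2 * fill_per_stream)      # ~10 mips
    print(one_stream, two_streams)
    # two independent instruction streams sharing the single pipeline could,
    # in principle, fill the slots that a single stream leaves empty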

After FS demise ... the mad rush to get products back into the
product pipeline ... 303x line in parallel with 370/xa (& 3081).

3031 & 3032 were essentially repackaged 158-3 & 168-3. 3033 started out
being 168 wiring diagram mapped to 20% faster chips. The chips also had
ten times the circuits ... but initially went unused (3033 20% faster
than 168-3) ... some last minute optimization leveraging onchip logic
... got it up to 50% faster than 168-3 (or 4.5mips/sec).

Most codes would run slightly faster at 5mips on the 370/195 than they
ran on the 4.5mips 3033 ... or close to the same on the 3083K (eventually
got around to doing the 3083K, basically a 3081K with one of the
processors removed ... in large part because ACP didn't have
multiprocessor support).

To get "peak" 10mip 370/195 thruput ... would have to go to clone
processor or wait for 3090.

I've mentioned before that SJR had a 370/195 running MVT for quite awhile
... and there was a big batch queue (sometimes taking several weeks or
more than a month for turnaround). The disk group was running "air
bearing" simulation in support of floating heads (flying much closer to
the surface, resulting in much higher datarate) ... but even with priority
consideration, turnaround could still be a week or two. When bldg. 15
got a 3033 for disk testing ... things were set up so the "air bearing"
simulation could run in the background. "air bearing" was optimized for
the 195 ... an hour of 195 cpu could turn into nearer two hours on the
3033 ... but elapsed time on the 3033 was about the same as cpu time
(instead of a couple of weeks).

If IBM Hadn't Bet the Company

hancock4 writes:
Is there any way to get a chart of the "pecking order" of modern Z
series machines? There seems to be a great number of models and sub-
models with a lot of overlap. It's not as simple as S/360-30, -40,
-50, 65, etc.

at least for awhile ... some of the sub designations were the number of
processors ... but that has gotten more complex with dynamic capacity
and being able to have extra processors turned on & off for peak loads
(along with hardware capacity-based pricing). Then there are things
with dynamic capacity and software-based capacity pricing. older
discussion with some of the sub numbers (some of it discussing smp
scaleup as the number of processors is added ... & "LSPR" ratios)
http://www.garlic.com/~lynn/2006l.html#41 One or two CPUs - the pros & cons

some of the specialized processors may not even be of the "Z"/360 kind
.... akin to "processor clusters" that I worked on in the mid-80s:
http://www.garlic.com/~lynn/2004m.html#17 mainframe and microprocessor

Huge <Huge@nowhere.much.invalid> writes:
Hmm, contrary to what the (so-called) Good Book says ("the truth shall
set you free") IME the truth gets you into trouble.

after ridiculing FS ... some huge amount of it not being practical
and/or pure blowing smoke w/o any content behind it (vaporware) ...
and other parts possibly not even as good as stuff I already had
running (along with a reference somewhat to the inmates being in charge
of the institution ... the cult film playing down at central sq)
http://www.garlic.com/~lynn/2011c.html#9 If IBM Hadn't Bet the Company

i get blamed for online computer conferencing on the internal network
in the late 70s and early 80s. folklore is that when the executive
committee was told about online computer conferencing (and the
internal network), 5of6 wanted to fire me.

RISC versus CISC

torbenm@diku.dk (Torben Ægidius Mogensen) writes:
As you hint, counting the number of instructions in an ISA is futile: If
you use a bit to select two alternative behaviours, is it one
parameterised instruction or two separate instructions?

the statement in the 70s about (801/)RISC was that it could be done in a
single chip. later in the 80s, (801/)RISC was that instructions could be
executed in a single machine cycle. Over the decades, the definition of
RISC has been somewhat fluid ... especially as the number of circuits in
a chip has dramatically increased.

--
virtualization experience starting Jan1968, online at home since Mar1970

2-3 weeks ago ... there was a news story that Japanese police had found
evidence of match fixing in sumo wrestling ... which had been raised in
FREAKONOMICS (published six yrs ago) ... i just got around to
reading SuperFreakonomics (on kindle).

--
virtualization experience starting Jan1968, online at home since Mar1970

the phenomenon was also referred to as Tandem Memos ... since some of
the activity was kicked off by some comments I had distributed after a
Friday afternoon visit to Jim Gray at Tandem (before Jim left research,
Jim would frequently attend the friday after-work events that I would
have at various places in the san jose plant site area).

Some of it leaked outside the company and there was an article on the
phenomenon in Nov81 Datamation (by then, I was under strict orders not
to talk to the press).

One of the outcomes of the task forces was a decision to provide official
corporate support (and control) for online computer conferences. A more
structured automated facility was created and used for the operations
(TOOLSRUN) ... and there were officially sponsored discussion groups
created (with moderators).

Later, a similar program was created/adopted for BITNET (with a subset of
the TOOLSRUN function) ... called LISTSERV (since then the LISTSERV
function has been ported to other platforms) ... misc. past posts mentioning
BITNET &/or LISTSERV
http://www.garlic.com/~lynn/subnetwork.html#bitnet

also, somewhat as a result of all the events ... a researcher was paid to
sit in the back of my office for nine months taking notes on how I
communicated. they also went with me to meetings, got copies of all my
incoming & outgoing email and lots of all my instant messages. The
result was a research report, a Stanford PHD thesis (joint between
language and computer AI) and material for several papers and books.
misc. past posts mentioning computer mediated conversation
http://www.garlic.com/~lynn/subnetwork.html#cmc

corporate hdqtrs eventually had a process that tracked the amount of
traffic on all the internal links across the world ... and there was a
claim that for some months, I was responsible for 1/3rd of all internal
network traffic
(on all links).

with the "official" online computer conferences ... there was periodic
jokes about a discussion being "wheeler'ized" ... there would be
hundreds of people contributing comments ... but half of the volume of
all comments were mine (i've since significantly mellowed).

although I had been doing online computer conferencing-like things
before, nothing seemed to catch the attention & interest of so many in
the corporation as did the comments about the visit to Jim and Tandem
(while only a few hundred actively participated, estimates were
that several tens of thousands were reading & following the
discussions).

for other Jim trivia/topic drift ... recent post about being asked by
Jim to help out one of his co-workers at Tandem who was being
blocked from getting his PHD at Stanford on something similar to
what I had done nearly 15yrs earlier as an undergraduate:
http://www.garlic.com/~lynn/2011c.html#8 The first personal computer (PC)

despen writes:
One of the best was a COBOL batch program that was taking 14 hours
to run. I changed some compiler options, submitted the job and got
back the output. The operator saw fit to write on the deck that
the job failed. It ran fine, it only took a few minutes to run.

COBOL batch program ... 450k+ statements ... ran every night on 40+
max configured mainframes (something like $30m per system) ... got a 14%
improvement after a couple of weeks of effort. The organization had a large
department for years dedicated to the performance of this application
... but they had gotten used to only using a specific set of tools for
looking at performance.

part of the problem was that it was one of the applications that
was starting to push the overnight batch window

at the science center in the early 70s ... we used a whole variety of
performance methodologies ... including some that eventually evolved
into things like "capacity planning" ... and this performance
organization had somewhat fallen into a rut only looking at performance
from a single point-of-view.

one story was when he was head of lightweight fighter plane design at
the pentagon ... and his 1-star general came into the area to find a
heated technical argument going on between him and a bunch of
lieutenants. the general fired him for not maintaining the correct
military atmosphere.

another was about the forces behind the F15 attempting to get him thrown
in Leavenworth (even tho he had redone the F15 design, cutting the weight
nearly in half).

They had gone to the secretary of the air force with the claim that they
knew he was designing what was to become the F16 ... which was
unauthorized ... and he
had to be using enormous amounts of supercomputer time ... worth at
least tens of millions; since it was unauthorized it amounted to theft
of gov. property.

There was a concerted effort to uncover evidence of this "theft" ... but
after several months of auditing all gov. supercomputer records ... they
couldn't find any evidence of his use.

The air force had pretty much disowned him ... but the marines adopted
him and it was the marines that were at arlington; his effects are at the
marine library at quantico ... and they have a shrine to him in the
library lobby. In the light of all that, it seems strange that the air
force would dedicate a hall to him.

when the lengthy spinney/time article appeared about gross pentagon
misspending, Boyd claimed that SECDEF knew that Boyd was behind the
article and had a directive that Boyd was banned from the pentagon.
supposedly there was also a new document classification ... "NO-SPIN"
... unclassified but not to be given to spinney.

when I sponsored Boyd's briefings at IBM, he only charged me for his
out-of-pocket expenses.

If IBM Hadn't Bet the Company

"Joe Morris" <j.c.morris@verizon.net> writes:
Standard Army story: a subordinate may ask for clarification of orders, and
might even (respectfully) suggest something different, but there comes a
time when there is only one correct response.

"YES SIR, HOW HIGH, SIR?"

That time, of course, comes much quicker when the subordinate is at the
bottom of the chain of command.

IBM "Watson" computer and Jeopardy

cb@mer.df.lth.se (Christian Brunschen) writes:
The 'POWER' CPU architecture (supposedly short for 'Performance
Optimization With Enhanced RISC') actually came entirely from IBM, and was
in use in the RS/6000 line of workstations, all while Motorola were
developing their successor to their successful CISC 680x0 line of CPUs,
the RISC-based 88000.

Apple, meanwhile, were looking to move away from the 680x0 line of CPUs;
but the 88000 didn't turn out as successful as Motorola had hoped. I don't
know the details, but eventually Apple, IBM and Motorola banded together
to combine IBM's POWER architecture with Motorola's implementation
know-how and (I think) the 88000's buses etc, thus creating the 'PowerPC'
line of microprocessors, often abbreviated as 'PPC'.

had previously worked at motorola. when somerset was originally started
(to do single-chip 801/risc ... starting with 601) ... he went over to
head up somerset. later he left somerset to be president of MIPS.

among other "simplifications" ... base 801/risc had no cache consistency
... which effectively made multiprocessors a difficult operation (John
would make comments about the heavy performance penalty paid by 370 for
multiprocessor cache consistency).

I recently mentioned there is a 25th reunion for aix (pc/rt) coming up at
the end of the month (and while lots of VRM people are showing up ... it
didn't look like any from interactive (that did AIX) were showing up). ROMP
mention (precursor to RIOS):
https://en.wikipedia.org/wiki/ROMP

ROMP (research/office products) was originally going to be used in a
follow-on to the displaywriter (using CP.r written in PL.8). When that was
canceled ... there was a decision to do unix workstations. The PL.8 people
did the VRM (in PL.8) ... and the company that did PC/IX (interactive) was
hired to do AIX ... implemented to the abstract virtual machine
interface (provided by the VRM). A major justification for the VRM/AIX was
that it could be done faster than having the interactive people learn the
low-level 801/ROMP characteristics. This was somewhat disproved when the
palo alto people were redirected from doing a BSD port to 370 ... to doing
a BSD port to the PC/RT (native, w/o VRM) called AOS. A more jaundiced view
of VRM was that it gave the PL.8 programmers something to do.

In ha/cmp, one of the reasons for doing scaleup as clusters ... was that
rios had no cache consistency for doing multiprocessor scaleup ... I had
worked on both multiprocessor implementations
http://www.garlic.com/~lynn/subtopic.html#smp

as well as cluster implementations previously ... recent references in
this thread about doing processor clusters at the same time as working
with NSF on what was to be the NSFNET backbone:
http://www.garlic.com/~lynn/2011c.html#6 Other early NSFNET backbone

The first personal computer (PC)

greenaum@yahoo.co.uk (greenaum) writes:
I've got the paper version. In colour! And I can stick my fingers and
bits of paper between pages as temporary cross-ref bookmarks. Actually
I've got the newer version with the unnecessary graphs and graphics
all over the place, which really don't add anything.

I'm not gonna bother with an e-book til they've got colour sorted out,
and a proper white background. Still seems like a remote second-best
to the alternative. And the feel of browsing a real bookshelf is
different to a directory listing. Owning real books seems like more of
an achievement, somehow.

1) don't bother to ask the customer about something you currently
don't support or

2) CPD support at the time was only for 56kbits, so people may have
looked for large numbers of 56kbit links (with CPD supported hardware)
on a smooth curve approaching aggregate T1 speeds. Lack of knowledge
about the business failed to alert them that a T1 was priced at about
six 56'ers (which is now down around 4 or less) ... making it wholly
unlikely that any customer would go over three.

Also, if you can't fix it, "feature it" ... CPD has made much of
fat-link capability (i.e. small aggregations of parallel 56kbit links).

Somebody recently inquired regarding the "network" forum that I
mentioned from Raleigh. The announcement file is attached. Part of the
distribution of the announcement information in the spring of '85 was
the inclusion of definitions (the distribution went out the friday
before the '85 VMITE; I remember the date because I had to be in Japan
the following week on business for the HSDT project ... and missed
VMITE).

The HSDT project intersected the CPD "definition" the following year
when the '86 CPD fall-plan forecast an aggregate total of 2-3 T1s
installed (at customers) by the early 90s. In the fall of '86, HSDT had
more T1s installed than were forecast for the whole customer community
(also at the time a superficial survey of IBM mainframe customers
showed an installed base on the order of 200 T1s, a large percentage
connected to IBM mainframes via NSC HYPERchannel).

A new IBM Internal Use Only conference service is available for anyone
interested in NETWORK problems, design, etc. It works like the IBMPC and
IBMVM conferences in that the discussions on any topic are each
contained in an ordinary CMS file which you may get, create, or append
to, as desired. Anyone interested in NETWORK problems, design,
performance measurements, engineering, etc is encouraged to use the
conference to communicate with others working in the same field.

More details are included in NETWORK RULES. This, and a simple
interface called NETWORK EXEC, are available from yyyy yyyyy (xxxxxx at
RALVM16). If possible use the TOOLS EXEC to request the NETWORK package
from PROAIDS at RALVM16 as follows:

I only have a few minutes this morning to read your note as I am preparing
to go to Boston for two days. But what I read is a reasonable argument for
pacing based on past experience, logical deduction and a little intuition.
You might be interested in a slightly different argument too. We in Manassas
have been working with signal processing in a distributed network for years,
since 1977 or so. In 1982 we developed a local area network for a submarine
that is being used today on board U.S. submarines. We started with the
observation that most of our messages are presented to the network at regular
intervals, periodic traffic. In cooperation with Carnegie Mellon Univ.,
we built on mathematical proofs which provide a guarantee of message
response time for any given set of messages as long as we pace the packets.
I know we approached the problem of network resource sharing from a different
starting point, but I think mathematically guaranteed response times along
with the ability to compute the adaptive parameters for pacing might be
a welcome addition to your paper.

If you think so too, we can provide as much of our experience as you
might like. By the way, this type of mathematical approach to determining
what "proper" scheduling means is being used by the DoD sponsored Software
Engineering Institute for Real Time Ada designs and has been incorporated
into the new Future Bus standard backplane bus. We can put you in touch
with these folks and/or the CMU people too, if you are interested.

same month that "slow-start" was presented at an IETF meeting, ACM
SIGCOMM had a paper on why "slow-start" was non-stable in high
latency, heavily loaded networks. I've conjectured in the past that
"slow-start" for congestion avoidance was done for a class of machines
that had very poor time services. A simple rate-based pacing
implementation adjusts the time interval/delay between packet
transmissions (requiring system time services).
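
a minimal sketch (python, purely illustrative ... not any product's
actual code) of the rate-based pacing idea: the sender spaces packet
transmissions by a time interval and adjusts that interval on feedback,
which is why decent system time services are needed:

import time

class RatePacer:
    def __init__(self, rate_pps=100.0):
        self.interval = 1.0 / rate_pps       # delay between packet sends
        self.next_send = time.monotonic()

    def wait_for_slot(self):
        # sleep until the next transmission slot (needs good time services)
        now = time.monotonic()
        if self.next_send > now:
            time.sleep(self.next_send - now)
        self.next_send = max(self.next_send, time.monotonic()) + self.interval

    def on_congestion(self):
        # stretch the inter-packet interval (cut the rate in half)
        self.interval = min(self.interval * 2.0, 1.0)

    def on_ack(self):
        # cautiously shrink the interval (raise the rate ~5%)
        self.interval = max(self.interval * 0.95, 0.0001)

contrast with window-based slow-start, where a sender with poor clock
granularity can only react per packet/ack arrival rather than on a timer.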

as part of HSDT, we came up with a 3-tier network architecture
... included it in a response to a large federal campus RFI and were out
pitching it to corporate executives. This was at the time when the
communication group was attempting to stuff the client/server genie back
into the bottle ... defending its terminal emulation install base

If IBM Hadn't Bet the Company

"Charlie Gibbs" <cgibbs@kltpzyxm.invalid> writes:
I was once called into a shop where the customer estimated
that their big inventory program would take 50 hours to run.
I seriously pissed off the CS weenie who wrote the thing when
I had the temerity to replace some complicated 3-level PERFORM
statements with a couple of nested loops (complete with that
profane word GOTO), even after pointing out the assembly-code
listings showing just how much overhead I was getting rid of.
This genius had also declared all of the program's many subscripts
as COMP-3 (packed decimal) rather than COMP-4 (binary); making this
one change knocked 30% off the execution time.

the 40+ systems at $30+M each (>$1.2B) were pretty much sized for the
all-night/overnight run of the 450+k statement cobol program ... they
also pointed out that they were constantly bringing in the latest
systems (no system was older than 18 months). A 14% improvement (a couple
weeks work) was around $200m savings (that, or they could process nearly
100m more accounts w/o needing additional hardware). At the time they
were looking at moving/converting a portfolio with something like 65m
accounts from somebody else.

I've mentioned before that I started out and offered to do it for 10%
of the first yr's savings (but when I was done ... they had no
recollection of that offer).
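
a toy illustration (python, hypothetical ... not the original cobol) of
why packed decimal subscripts cost more than binary ones: every packed
reference forces a decimal-to-binary conversion before it can be used as
an index, while a binary subscript indexes directly:

def packed_to_int(b: bytes) -> int:
    # decode packed decimal (COMP-3 style): two digits per byte,
    # low nibble of the last byte is the sign
    digits = []
    for byte in b:
        digits.append(byte >> 4)
        digits.append(byte & 0x0F)
    sign = digits.pop()
    value = 0
    for d in digits:
        value = value * 10 + d
    return -value if sign == 0x0D else value

table = list(range(10_000))
packed_sub = b"\x01\x23\x4C"     # packed decimal +1234
binary_sub = 1234                # binary (COMP-4 style): ready to use

assert table[packed_to_int(packed_sub)] == table[binary_sub]
# the packed version pays the conversion loop on every single reference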

--
virtualization experience starting Jan1968, online at home since Mar1970

totally unrelated performance work in the early/mid-90s ... was
"routes" from a major airline res system (ACP/TPF) ... which accounted
for about 25% of the processing. They had a list of ten "impossible"
things they wanted to do ... including significant scaleup
... theoretically being able to handle every reservation for every flt in
the world. The implementation paradigm being used was effectively
unchanged from the 60s.

I looked at it and decided technology had changed over the previous 30
or 40 yrs ... so that I could take a completely different approach. I
came back within two months with a demo of the new implementation. The
basic process ran 100 times faster ... but I also added some new
features ... one was that typical operations had required three different
manual searches/queries ... which I had collapsed into one. I could
only do about ten times as many of the new "routes" operations ... but
each one did much more work (and eliminated two manual operations by
reservation agents).

On the initial pass I only got about 20 times the performance ... and
then went back and carefully optimized for the cache characteristics of
the machine it was being demo'ed on ... and got another five times
(for 100 times total).

One of the "impossible" things was that the production system tended to
only be able to do two or three connections ... more than that required
manual operation by an agent. I claimed to be able to find route/flts
from anyplace to anyplace else (they had provided me with a copy of the
complete OAG ... all commercial scheduled flts and all airports with
commercial scheduled flts). As part of the demo, they would ask for
things like route/flts from some obscure airport in Kansas to some
equally obscure airport in Georgia (and probably not the Georgia you are
thinking of).

They ran into an organizational roadblock ... one of the reasons that
many of the processes they wanted to do were "impossible" was because
there were nearly 1000 people doing manual support operations. Part of
the new paradigm eliminated all of those manual support operations (in
theory the whole thing could now be done with less than 40 people). It
turns out that there were a lot of high level executives who would be
affected by this ... things dragged on for nearly a year ... and they
eventually said that they hadn't actually wanted me to do anything
... they just wanted to tell the airline board that I was consulting
on the problem.

The first personal computer (PC)

Roland Hutchinson <my.spamtrap@verizon.net> writes:
I believe there are places where billboards have at one time or another
been eliminated from particular highways. If one had data on sales
surrounding the time when that happened, one might be able to draw some
conclusions.

"lady bird" had a lot of billboards eliminated (along interstates) when
her husband was president ... she also promoted other federal
"beautification" projects ... some involved overhead transmission lines.

the directive/mandate came down from "lady bird" that the transmission
lines had to be buried underground (going up the hill to behind where
the camera taking the picture is located). all the engineers claimed
there wasn't technology available to put such transmission lines
underground going up that slope. they were told it had to be done
anyway. a couple years later when there was large electrical short and
big fire ... the engineers that said the technology wouldn't work
... were the ones blamed (not lady bird).

Patrick Scheible <kkt@zipcon.net> writes:
Sometimes, however depending on the situation you may not get approval
for a project if you give a realistic estimate upfront. The job might
not get done at all, or might go to your rival (inside or outside the
organization) who gives an optimistic estimate.

some of this has gotten institutionalized in various parts of the gov.
... discussed in some detail in this reference to the pentagon:

way up at the top is a large number of legacy computer "re-engineering"
efforts that have gone on in every federal organization ... akin to the
reference to billions spent in the financial industry in the 90s on
failed straight-through processing re-engineering ... referenced in this
recent post
http://www.garlic.com/~lynn/2011c.html#35 If IBM Hadn't Bet the Company

Charles Richmond <frizzle@tx.rr.com> writes:
But there *ought* to be some punishment for those politicians who put
out a "bond issue" (read "tax") to build a new convention center. I
knew when I voted "NO" that the damn convention center would cost
twice as much as they quoted, by the time it was built... and it
did!!!

Then there is San Jose's light rail. The original justification was
ridership volume which was dependent on elapsed time commute for
riders in south san jose to employers in mid-peninsula ... which was
dependent on all light rail crossings being "off-grade" (i.e. no
intersections with auto street traffic).

somewhere along the way somebody decided to save money and eliminate
some number of the off-grade crossings, which blew the elapsed time
commute numbers which blew the ridership volume that justified having
the light rail in the first place.

then there was the "new" 101 from cottle rd (south san jose) to
gilroy. the coyote valley association campaigned that the new 101 should
drop from six lanes to four lanes through coyote valley
(approx. bernal rd in san jose to cochran in morgan hill). That
resulted in adding 30 minutes to the morning commute for tens of
thousands of commuters going north at the cochran choke point and
another 30 minutes to the evening commute at the bernal choke point
going south ... possibly costing 10,000 people-hrs/day (in addition to
extra auto pollution); 50k people-hrs/week, 2.5m people-hrs/year. then
much later came the incremental construction cost to add the extra two
lanes (compared to having just done the full six lanes as part of the
original construction in the first place).

who should get the bill for those 2.5m people-hrs/year (even at $10/hr
... that is still $25m/yr)?

the reverse somewhat happened during the financial crisis. unregulated
loan originators (who nominally were a very small part of the business
because they had a very limited source of funds for lending, which was
possibly why nobody got around to regulating them) found that they
could securitize the loans and pay the rating agencies for triple-A
ratings. The triple-A ratings gave them access to a nearly unlimited
source of funds (the estimate is that during the mess, something like
$27T in triple-A rated toxic CDO transactions were done).
Evil Wall Street Exports Boomed With 'Fools' Born to Buy Debt
http://www.bloomberg.com/apps/news?pid=newsarchive&refer=home&sid=a0jln3.CSS6c

Now the unregulated loan originators could unload every loan they
wrote, regardless of loan quality and/or borrower qualifications.
Speculators found the no-documentation, no-down, 1% interest-only
payment ARMs a gold-mine ... possibly 2000% ROI in areas with 20-30%
inflation. The speculation created the impression of enormously more
demand than actually existed, the speculation demand motivated enormous
over building, and the over building required municipalities to build
out a whole lot of new services for all the new housing projects (as
well as commercial developers doing a lot of new projects like strip
malls for the new housing developments).

For all the new services, the municipalities issued lots of new bonds
... anticipating that they would be covered by revenue when all the new
houses were sold. Then things crashed ... and they aren't getting the
revenue to cover all those new bonds. The crash spreads out thru the
economy ... strip malls are going unsold, commercial developers are
defaulting, and those defaults are starting to take down local banks.

An early secondary side-effect was that the bond market froze ... when
investors found out that the rating agencies were selling triple-A
ratings and started wondering whether they could trust any ratings
from the rating agencies. The muni-bond market was restarted when
Buffett stepped in and started offering muni-bond insurance (this was
before the deflating bubble had percolated into hitting municipality
revenue and the ability to pay on all those bonds) ... past post
http://www.garlic.com/~lynn/2008j.html#20 dollar coins

Two nights ago, 60mins-on-CNBC had a program on ponzi schemes and an
update on the financial crisis. At the end of 2008, there was an estimate
that the four largest too-big-to-fail financial institutions had $5.2T
in triple-A rated toxic CDOs being carried off-balance (courtesy of
their unregulated investment banking arms and the repeal of
Glass-Steagall). The 60mins report was that these too-big-to-fail
institutions helped keep the bubble going by continuing to buy/trade
each others' triple-A rated toxic CDOs (while holding the triple-A rated
toxic CDOs would severely damage the institution, the investment bankers
were getting big bonuses, fees, and commissions as long as they could
keep the trades going; they pretty much all knew that the triple-A
rated toxic CDOs weren't worth having ... but as long as the
music played ... they could continue to rake in the money off the
trades, churning each others' portfolios).

Later there were some "regular" sales involving several tens of
billions of these toxic CDOs, and they were going at 22 cents on the
dollar (if the four largest too-big-to-fail institutions had been
required to bring the $5.2T back onto the books, they would have been
declared insolvent and have had to be liquidated). Recently, after the
Federal Reserve was forced to divulge some of the stuff it has been
doing, buried in the numbers was a reference to the FED buying
up these toxic CDOs for 98 cents on the dollar (as part of its
propping up of the too-big-to-fail institutions).

--
virtualization experience starting Jan1968, online at home since Mar1970

The first personal computer (PC)

Peter Brooks <peter.h.m.brooks@gmail.com> writes:
I think it'd be a lot quicker, and easier, to work out the
distribution of clever people in any particular region. There are far
fewer of them, for a start, and they tend to be found in more easily
distinguished clusters.

On the other hand, I suppose you could see how well a particular group
is represented in the Darwin Awards - they mostly seem to be male, for
a start, which should please some people.

after watching a large number of people during airline boarding by rows
... not being able to figure out if one number is larger than another
number ... i considered that moving to boarding by sections was an
attempt to eliminate discrimination against the mathematically
challenged. however, in discussing this in other fora ... there are
claims that there is still a significant number of people who can't get
it right when boarding by sections (even when printed on their boarding
pass).

... possibly a lot of these people really don't have much connection
between the various sections of their brain ... they see some number of
other people moving and they move too ... more akin to sheep herds
https://en.wikipedia.org/wiki/Sheep

--
virtualization experience starting Jan1968, online at home since Mar1970

If IBM Hadn't Bet the Company

Anne & Lynn Wheeler <lynn@garlic.com> writes:
Then there is San Jose's light rail. The original justification was
ridership volume which was dependent on elapsed time commute for riders
in south san jose to employers in mid-peninsula ... which was dependent
on all light rail crossings being "off-grade" (i.e. no intersections
with auto street traffic).

somewhere along the way somebody decided to save money and eliminate
some number of the off-grade crossings, which blew the elapsed time
commute numbers which blew the ridership volume that justified having
the light rail in the first place.

[2] A little voice in the back of my mind keeps saying,
"But Microsoft isn't stupid!"

at the m'soft developers conference (MDC) spring '96, held at the
(sanfran) Moscone convention center ... some number of m'softers were
saying that year was a major turning point.

up until then, people would get the latest release (every year or more
often) because it would have new function that they needed. the turning
point, approx '96, was that 95% of the people had 95% of what they
used. it was time to switch to a new marketing campaign similar to new
cars in the 60s ... somehow convince people to buy a new one whether
they needed it or not.

even tho the software was "purchased" ... the business had been
similar to IBM's hardware lease business (prior to the early 70s when
most machines were converted to purchase) ... aka a dependable regular
revenue stream (from the lease business). When people start keeping
their cars for 5-10 yrs ... it has a big downside on the annual revenue
stream (compared to everybody getting a new one every year).

the industry has been especially accused of maintaining a broken PC
security paradigm in order to keep up that part of the regular annual
revenue stream.

--
virtualization experience starting Jan1968, online at home since Mar1970

Ahem A Rivet's Shot <steveo@eircom.net> writes:
Neither the xmas exec nor the Morris worm had anything to do with
MUAs that execute attachments without user intervention. There have been
exploitable weaknesses in every computer system made, this particular one
was AFAICT invented by Microsoft.

'96 MDC in moscone ... had all these banners that proclaimed internet
support ... but the repeated theme/phrase almost everywhere was "protect
your investment" ... this was all the developers that did various forms
of basic programming ... including stuff that could be added to all
sorts of email and office files ... that would automatically execute for
various kinds of special effects ... but resulted in enormous
vulnerabilities when moved to the internet. other recent mention
of '96 MDC in moscone
http://www.garlic.com/~lynn/2011c.html#49 Abhor, Retch, Ignite?

I talked to mitre people at the time about seeing if they could get the
people doing the submissions to add slightly more structured descriptive
information (that could be used in categorization). the reply was that
they were lucky enough to get people to write anything ... and trying to
apply rules would probably backfire. I was working on adding to
my merged security glossary and taxonomy ... reference here

If IBM Hadn't Bet the Company

Michael Wojcik <mwojcik@newsguy.com> writes:
FS, of course, was another story entirely - much too ambitious,
completely oversold within the company, etc - and if Lynn hadn't
intervened it could have been a real disaster for IBM. (I'm assuming
Lynn's version of events is more or less accurate, but I don't have
any reason to believe otherwise.) As it was, FS was canceled and some
of the tech did find commercial success in the System/3x and later
AS/400, but IBM didn't try to shoehorn everything into it.

I may have ridiculed them ... but I don't think that my voice was that
instrumental in the FS demise. Thousands of people were involved in FS
... and whole sections of the architecture were vaporware ... which had
to eventually be obvious to large numbers.
http://www.garlic.com/~lynn/submain.html#futuresys

a lot of one-level store was copied from the (failed) tss/360 effort
... as well as other activities going on around the industry ... for
instance Multics ... on the floor above the cambridge science center at
545 tech sq. I mentioned first doing paged mapped support for cp67/cms
(during the FS period) ... trying to avoid the shortcomings that I
observed in tss/360
http://www.garlic.com/~lynn/2011c.html#12 If IBM Hadn't Bet the Company

some amount of the s/38 was the high level abstraction and application
simplification ... moving into business areas with much lower skills
and resources. One of the early s/38 pitches was that the whole s/38
activity at a company could be handled by a single person. for the
low-end, the issues regarding skills/availability were more of a market
inhibitor than the cost of the hardware (with a possible factor of 30
times thruput hit)

however, in a large established operation with critical dependencies on
dataprocessing ... giving up 30 times performance ... and max'ing out
at 370/145 thruput (using 370/195 speed hardware), would have had
enormous impact ... reference to the FS performance study by the
Houston Science Center
http://www.garlic.com/~lynn/2011c.html#17 If IBM Hadn't Bet the Company

FS architecture was divided into approx. a dozen sections ... and at the
time, my wife reported to the person that owned one of the sections. She
thot FS was fantastic because she got to consider every academia blue
sky idea that had ever been thot of. However, she also observed that
there were whole sections of FS architecture that were purely blue sky
... with no actual content (vaporware). The enormous amount of blowing
smoke, content-free material and vaporware turned into a polite
description of being "too ambitious". "Too ambitious" is possibly also a
polite way of describing high level, complex hardware that results in
taking a factor of 30 times performance hit.

I've tripped across comments that the FS compartmentalizing was
possibly done for security reasons ... industrial espionage of any
specific FS component still wouldn't allow the competition to build a
product. A more jaundiced view was that the compartmentalization
prevented people from realizing how bad things actually were. A
humorous reference was that if a vendor somehow actually got the
specifications for all the different parts (and didn't die laughing),
any attempt to build a competitive implementation would have destroyed
them.

--
virtualization experience starting Jan1968, online at home since Mar1970

IBM and the Computer Revolution

Michael Wojcik <mwojcik@newsguy.com> writes:
Agreed. Also the Palo Alto and Cambridge (Massachusetts) Scientific
Centers, of which Lynn has often written, and so on.

I agree that IBM did not invest so heavily in research in the '50s and
'60s as it did in later decades; and it has never been an organization
to introduce innovative products simply for the purpose of innovation.
IBM is first and foremost a business.

there were observations that part of the mad rush to get products back
into the pipeline (after the demise of FS) ... was to pull all resources
off "advanced technology" efforts for a heads-down effort on getting out
current development as fast as possible. that was the excuse given for
there not being a corporate advanced technology conference between the
one in POK where 801/RISC was presented (and we presented 16-way smp)
... and the one I held in the spring of '82.

It was also in this period that the Philadelphia science center (where
lots of APL stuff had been done) and the Houston science center were
shutdown (Cambridge and Palo Alto continued to survive for a period).

--
virtualization experience starting Jan1968, online at home since Mar1970

the big difference between the "Sequent" executive retirement and
the Oct91 executive retirement ... was that the Sequent scenario was
possibly somewhat NIH for the rest of the company ... while the Oct91
executive retirement and the followup reviews possibly turned into
something of an "Emperor's New Clothes" moment

If IBM Hadn't Bet the Company

Anne & Lynn Wheeler <lynn@garlic.com> writes:
FS architecture was divided into approx. a dozen sections ... and at the
time, my wife reported to the person that owned one of the sections. She
thot FS was fantastic because she got to consider every academia blue
sky idea that had ever been thot of. However, she also observed that
there were whole sections of FS architecture that were purely blue sky
... with no actual content (vaporware). The enormous amount of blowing
smoke, content-free material and vaporware turned into a polite
description of being "too ambitious". "Too ambitious" is possibly also a
polite way of describing high level, complex hardware that results in
taking a factor of 30 times performance hit.

In the 757/767/777 era there were stories about Boeing "outsourcing" a
large amount of the work to their suppliers ... cutting down some of the
big employee boom/bust cycles in Seattle, i.e. a large amount of work
that had been done in-house would instead be performed by supplier
employees. Since a large proportion were in the US ... it didn't get the
same publicity (i.e. the publicity isn't really about outsourcing ... but
whether the jobs are in the US or not ... pretty much independent of
whether the jobs have been outsourced or are done by non-US employees).

Somewhat more mainframe related and foreign competition: a major
(CAD/CAM) design tool used was re-logo'ed by IBM from a foreign
company. During the OCO-wars, one of the arguments was that customers
needed source for business critical components where the customer was
willing to devote the resources for more timely support than might be
available through the normal channels.

In the CAD/CAM relogo ... the IBM support group just acted as a problem
clearing house (and at the time, also did not have access to the source)
for the original vendor. At the time there was also speculation about
international issues, since the CAD/CAM vendor was viewed as having
close ties with Boeing's major competitor.

--
virtualization experience starting Jan1968, online at home since Mar1970

IBM Future System

There were these skirmish arguments between the IMS and System/R groups
... IMS saying that relational required twice the disk space (for the
implicit index) and a large number of additional disk I/Os (in
processing the index). The System/R response was that IMS exposure of
record pointers as part of the data schema required significant
application programmer and administrative effort. misc. past system/r
posts:
http://www.garlic.com/~lynn/submain.html#systemr

Going into the 80s, disk price/bit came down (significantly mitigating
the disk space issue); there was a significant increase in system real
storage (mitigating index disk i/o by allowing a large amount of the
index to be cached); and the supply of skilled people didn't keep pace
with the demand, so their costs went up. All of these factors started to
tip the balance towards relational. However, there are still significant
pockets of IMS use ... especially in the financial industry; some of it
is purely legacy ... but otherwise there are still some things (like
large ATM networks) that haven't tipped from IMS to relational.
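
a toy contrast (python, illustrative structures only ... not either
product's actual design) of the two arguments: IMS-style records expose
pointers the application must know about, while the relational version
pays for a separate implicit index (extra space, extra lookups) to keep
the schema pointer-free:

# IMS-ish: the pointer to a related record is part of the record itself
ims_records = {
    100: {"name": "smith", "balance": 10, "next_rec": 205},
    205: {"name": "jones", "balance": 20, "next_rec": None},
}
related = ims_records[ims_records[100]["next_rec"]]  # app follows raw pointer

# relational-ish: rows are pointer-free; an implicit index maps
# key to row location (roughly doubling space, adding index I/Os)
rows = [("smith", 10), ("jones", 20)]
index = {"smith": 0, "jones": 1}

def lookup(name):
    return rows[index[name]]     # index probe first, then the data access

assert lookup("jones") == ("jones", 20)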

S/38 inherited some amount of FS ... and there were early claims that
S/38 installations could get by with a single person, a dataprocessing
manager ... and not need application programmers or other staff.

Note that the motivation for FS was clone controllers ... and other
objectives may have been established once FS was going ... by an
executive that was part of FS

from above:
IBM tried to react by launching a major project called the 'Future
System' (FS) in the early 1970's. The idea was to get so far ahead
that the competition would never be able to keep up, and to have such
a high level of integration that it would be impossible for
competitors to follow a compatible niche strategy. However, the
project failed because the objectives were too ambitious for the
available technology. Many of the ideas that were developed were
nevertheless adapted for later generations. Once IBM had acknowledged
this failure, it launched its 'box strategy', which called for
competitiveness with all the different types of compatible
sub-systems. But this proved to be difficult because of IBM's cost
structure and its R&D spending, and the strategy only resulted in a
partial narrowing of the price gap between IBM and its rivals.

... snip ...

One might claim that the extremely baroque nature of the PU4/PU5
(ncp/vtam) interface (under the guise of SNA) was an attempt to meet the
original FS motivation/objective (and not the reduced-people-effort
objective ... since it significantly drove up effort).

Trivia drift ... I worked on a clone controller project as an
undergraduate in the 60s ... later four of us were written up as being
responsible for (some part of) the clone controller business.

RISC versus CISC

nmm1 writes:
That is certainly possible, in which case it would mean that my
estimate of 1% is too high - but it would then move the argument
to the other aspect. If the reason that they use such kludges is
because multiple precision integer arithmetic is too expensive,
that is an argument for improving it, as against not improving it
because it isn't used!

But there is still SSH and a fair number of other such protocols,
which are heavily used by people who absorb bandwidth like sponges.

from long ago ... and far away ... we were brought in to consult with a
small client/server startup that wanted to do payment transactions on
their server ... they had also invented this stuff they called SSL that
they wanted to use; the result is now frequently called "electronic
commerce".

somewhat as a result, we were asked to get involved in some number of
other (payment related) efforts. One was by the electronic payment
associations in combination with several technology vendors. They
published a specification that called for a large number of big number
operations for the whole operation (not just for key exchange). The
standard library for performing such operations was called "BSAFE" ...
and the standard implementation did 16-bit operations. I immediately did
a profile of the number of operations called for ... and got a friend
who had modified BSAFE to use 32-bit operations (it ran four times
faster) to benchmark on a number of different platforms.
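
a minimal sketch (python ... assumed, not BSAFE's actual code) of why
the limb width matters: schoolbook multiplication of two k-limb numbers
costs about k^2 single-limb multiplies, and a 1024-bit number is 64
16-bit limbs but only 32 32-bit limbs ... 64^2 vs 32^2 is the four
times difference:

def to_limbs(n, bits):
    mask, limbs = (1 << bits) - 1, []
    while n:
        limbs.append(n & mask)
        n >>= bits
    return limbs or [0]

def from_limbs(limbs, bits):
    v = 0
    for limb in reversed(limbs):
        v = (v << bits) | limb
    return v

def mul(a, b, bits):
    # schoolbook multiply: the inner loop runs len(a)*len(b) times
    mask = (1 << bits) - 1
    out = [0] * (len(a) + len(b))
    for i, ai in enumerate(a):
        carry = 0
        for j, bj in enumerate(b):
            t = out[i + j] + ai * bj + carry
            out[i + j], carry = t & mask, t >> bits
        out[i + len(b)] += carry
    return out

n = (1 << 1024) - 159
for bits in (16, 32):
    k = len(to_limbs(n, bits))
    print(f"{bits}-bit limbs: {k} limbs, ~{k * k} inner-loop multiplies")
assert from_limbs(mul(to_limbs(n, 16), to_limbs(n, 16), 16), 16) == n * n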

I then presented the numbers to the committee (payment & technology
reps) ... the members claimed the numbers were 100 times too slow (if
they had ever done any actual operations, they should have claimed it
was four times too fast). Six months later, when they had a running
prototype ... it turned out the profile numbers were within a couple
percent of actual (by then the standard BSAFE library distribution
included the 32-bit support).

Note that possibly one of the reasons for the 100-times claim ... was
that it was, in fact, an increase of 100 times over the computational
load for doing an existing payment transaction (and for no actual
effective improvement over what was being provided by SSL). misc. past
posts mentioning their enormous 100-times computational (as well as
100-times payload size) bloat
http://www.garlic.com/~lynn/subpubkey.html#bloat

note some payment/security chipcards had 1024-bit math circuits/hardware
added ... to try and address the enormous elapsed time issue at
point-of-sale ... but that dramatically increased the number of circuits
in such chips (increasing chip size & reducing chips/wafer), as well as
the power draw per unit time (aka the total energy to do the operations
was still about the same, just compressed into a shorter period ...
which was still significant).

a little later, I was approached by some in the transit industry asking
if I could do a design that had roughly equivalent characteristics but
w/o the enormous power, elapsed time, chip size, etc penalty (it could
be done in a small fraction of a second for a transit turnstile ...
using power from the contactless/RF operation ... and a small
inexpensive chip).

--
virtualization experience starting Jan1968, online at home since Mar1970

the guy running the TPM effort was in the front row ... so I quipped
that it was nice to see TPM starting to look more & more like my chip
... and he quipped back that I didn't have a committee of 200 people
helping me with the design.

--
virtualization experience starting Jan1968, online at home since Mar1970

By the early 80s there were a growing number of stories about the
tightly integrated, highly complex, baroque FS-philosophy SNA
implementation (w/o "clean" interfaces). A simple example was a large
environment installing a slightly different device at a remote location,
which 1) required a new microcode load in the remote controller, 2) a
new NCP version at the datacenter, 3) a new VTAM version in the host,
and 4) a new MVS version. This required a simultaneous, coordinated
upgrade of all components across the whole infrastructure (typically
over a long weekend), and any glitch in any part of the process
frequently required reverting/rolling back the whole infrastructure.

These horror stories increased during the 80s ... especially for large
customers with multiple systems per datacenter and multiple large
datacenters ... where there would have to be a simultaneous,
coordinated upgrade of the whole infrastructure (with a glitch in any
one piece possibly forcing reverting the whole environment).

If FS had ever proven to be "deliverable" ... it would have implied
abandoning the large growing customers (because of the growing
impossibility of coordinated, simultaneous upgrades across a large
environment) in favor of the s/38 class customers, who were having
trouble getting support staff.
http://www.garlic.com/~lynn/submain.html#futuresys

The internal network saw this in the 70s & 80s with JES2 having
effectively intermixed networking fields with other job control fields
(no cleanly separated operation). Slightly different versions of JES2
attempting to communicate could result in one or both of the systems
failing. The internal network was larger than the arpanet/internet from
just about the beginning until possibly late '85 or early '86, and was
primarily VM RSCS/VNET. RSCS/VNET had addressed the clean separation,
and as a result JES2 systems were pretty much restricted to boundary
nodes ... and a whole library of special RSCS/VNET drivers was created
for talking to JES2 systems ... which would convert all the necessary
JES2 fields to the format required by the particular JES2 system at the
other end of the link. misc. past posts mentioning hasp/jes2
http://www.garlic.com/~lynn/submain.html#hasp
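
a sketch (python, hypothetical field layouts ... the real drivers and
JES2 formats were nothing this simple) of the boundary-driver idea: the
network keeps its own clean header and a per-link driver converts to
whatever the particular JES2 version at the other end expects, so a
format change means a new driver rather than a simultaneous
everywhere-at-once upgrade:

# clean internal header, kept separate from any JES2 job-control layout
header = {"origin": "NODEA", "dest": "NODEB", "spoolid": 1234}

def jes2_old_driver(h):
    # hypothetical older wire layout: origin first, 5-digit spoolid
    return f"{h['origin']:<8}{h['dest']:<8}{h['spoolid']:05d}"

def jes2_new_driver(h):
    # hypothetical newer wire layout: dest first, 8-digit spoolid
    return f"{h['dest']:<8}{h['origin']:<8}{h['spoolid']:08d}"

link_drivers = {"NODEB": jes2_new_driver, "NODEC": jes2_old_driver}

def send(h, link):
    return link_drivers[link](h)     # conversion happens at the boundary

print(send(header, "NODEB"))
print(send(header, "NODEC"))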

There is somewhat notorious folklore about an ("upgraded") JES2
system in the San Jose plant site causing MVS system crashes in
Hursley ... and what was worse, Hursley blamed RSCS/VNET for having not
been correctly configured to prevent the (Hursley) MVS
crashes. misc. past posts mentioning the internal network
http://www.garlic.com/~lynn/subnetwork.html#internalnet

--
virtualization experience starting Jan1968, online at home since Mar1970

IBM 100: System 360 From Computers to Computer Systems

Allodoxaphobia <knock_yourself_out@example.net> writes:
There are a number of anecdotal stories about companies taking the
covers off and sending them to a paint shop to achieve their own
corporate color scheme.

the science center had five 2314 8-drive strings and one 2314 5-drive
string ... connected to the 360/67 (running cp67) ... and the IBM CE
painted each a different color ... to help with tracking which one was
which (for things like mount requests).

z/OS 1.13 preview

ps2os2@YAHOO.COM (Ed Gould) writes:
I cannot remember where my first run in with VSE was. It was a LONG
time ago. Probably im the mid 70's (???). I think it was in St Louis
at an IBM school there.We were trying to set up a 4331 for our New
York Office. It was either there or out in the LA IBM office. All I
can remember is that I did not like it in general. It was not
consistant on how it handled "things" (control cards and the like) its
been so long that the abiortion must have reconcieved. I dio remember
thinking IBM was out of their gourd for propegatting it and it shouuld
have been put on suicide watch (I would have helped pull ther
trigger).

high-end did 303x ... 3031 & 3032 were repackaged 158 & 168
... and 3033 started out with the 168 wiring diagram mapped to chips
that were 20% faster. the chips also had ten times as many circuits
... initially mostly unused ... but during the product development
some redesign to use the additional circuits got 3033 up to 1.5 times
168 (instead of 1.2 times). In parallel with 303x ... things started
on "XA" ... for awhile known as "811" ... which eventually resulted in
3081. some discussion of both FS & 3081:

and with the failure of FS, mid-range started work on the "E"
architecture ... and in 79 came out with the 43xx machines (followons to
138 & 148) that supported both vanilla 370 and "E" (somewhat akin to
3081 with 370 & "XA" modes). misc. past 43xx email ... starting in jan79
doing benchmarks on engineering 4341s:

some of the benchmarks were being done on the engineering 4341 in the
disk product test labs for the endicott performance test group ... since
i seemed to have better access to (endicott) 4341 than they did.

"E" architecture was somewhat akin to initial VS2 (SVS) with much of the
single virtual address space moved into microcode/hardware.

the above mentions that 4341 supported 3370 ... which was the mid-range
disk ... and was FBA. There was no mid-range CKD at the time ... which
sort of left MVS out of the big explosion in the mid-range market ...
customers could upgrade 370 and continue to use existing legacy DASD
... but it was difficult to see MVS on all the 43xx machines that were
starting to proliferate all over corporations in departmental conference
rooms and supply rooms. Eventually the 3375 was produced, which was CKD
emulated on 3370 FBA ... to address the lack of MVS support for FBA.
misc. past posts mentioning FBA & CKD
http://www.garlic.com/~lynn/submain.html#dasd

--
virtualization experience starting Jan1968, online at home since Mar1970

IBM and the Computer Revolution

Ahem A Rivet's Shot <steveo@eircom.net> writes:
and indeed when used with computers "Carriage Return" and "Line
Feed" have clear physical meanings on things like teletypes and there are
various ways in which they are useful independently.

and since the print mechanism wasn't interlocked ... there was a science
of transmitting the correct number of "idle" characters during "carriage
return" ... to allow the carriage to have actually reached the return
position before transmitting additional (print) characters.
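
a sketch (python, with made-up timing numbers ... not any real
terminal's spec) of the idle-character calculation: the further right
the carriage is, the longer the return travel, so the more do-nothing
characters have to be sent after CR before the next printable one:

CHAR_TIME_MS = 67        # time one character occupies the line (~15 cps)
RETURN_MS_PER_COL = 4    # assumed carriage travel time per column

def idle_chars_after_cr(column):
    travel_ms = column * RETURN_MS_PER_COL
    # ceiling division: enough idle characters to cover the travel time
    return -(-travel_ms // CHAR_TIME_MS)

for col in (10, 40, 72, 132):
    print(f"CR from column {col:3}: pad with {idle_chars_after_cr(col)} idle chars")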

IBM Future System

As per above, my wife reported to the owner (who previously had been
head of the cp40 group at the cambridge science center) of one of the FS
"sections" ... and at some point started reviewing lots of the other
specifications ... which was the source of her comments that whole
sections were content free (vaporware).

I was told folklore that "811" came from the nov78 date on some of the
documents.

i had a file drawer full of candy-striped 811 documents ... which
required special security handling. I've suspected that the list of
candy-striped document owners was the subject of industrial espionage.
At one point I got a call from a recruiter about a position as assistant
to the president of a silicon-valley subsidiary of a foreign company.
During the interview there were veiled references to new product
documents. I took the opportunity to say that I had recently submitted a
"speak-up" about some questionable business practices at a company unit,
suggesting some specific examples needed adding to the employee
conduct manual.

later, during gov. prosecution of the foreign company for illegal
activity in the US, I had a 3hr debriefing by an FBI agent regarding
who said what during that interview.

from ibm jargon:
candy-striped - adj. Registered IBM Confidential (q.v.). Refers to the
Red and White diagonal markings on the covers of such documents. Also
used as a verb: Those figures have been candy-striped.

Registered IBM Confidential - adj. The highest level of confidential
information. Printed copies are numbered, and a record is kept of
everyone who sees the document. This level of information may not
usually be held on computer systems, which makes preparation of such
documents a little tricky. It is said that RIC designates information
which is a) technically useless, but whose perceived value increases
with the level of management observing it; or b) is useful, but which
is now inaccessible because everyone is afraid to have custody of the
documents. candy-striped

--
virtualization experience starting Jan1968, online at home since Mar1970

43xx saw similar volumes in the small-number unit sales ... a
difference was that there were some number of multi-hundred 43xx orders
by large corporations ... for the leading edge of "distributed
computing". internally they were converting departmental conference
rooms into 43xx rooms ... which resulted in a conference room scheduling
crisis.

there was some expectation that the 4331/4341 follow-ons (4361/4381)
would see similar large continued growth ... but by the mid-80s ... the
VAX sales numbers showed that the mid-range market was moving (to
workstations and large PCs).

IBM and the Computer Revolution

"Rod Speed" <rod.speed.aaa@gmail.com> writes:
I doubt it. Porn isnt that price sensitive a market and
wasnt that big a percentage of the market either.

It was more likely to be stuff as basic as the running time with movie
rental.

the counter is that both video and the internet got the majority of
their early uptake from adult stuff.

competition significantly brought down the price of VHS machines
(especially compared to betamax machines) ... so part of the tape
(rental & sales) market was responding to the largest segment with a
specific kind of machine (part of betamax trying to control the video
market)

in the late 90s, a website outsourcer claimed that it had ten e-commerce
websites that all had higher hits per month than the #1 listed
site in popular measurements ... and they were all adult sites (which
weren't interested in being part of popular #1 listings since
they were doing just fine w/o any such publicity).

there was also a footnote that they had several games & software
e-commerce websites where credit card fraud approached 50%, while adult
content websites had near zero fraud.

--
virtualization experience starting Jan1968, online at home since Mar1970

4331 would be vse and/or vm. 4341 was big enuf to run mvs ... but mvs
never came out with FBA support ... and the only mid-range disk was
3370 (a FBA device). it would be possible to upgrade an existing 370
processor (running MVS) to 4341 ... retaining any existing CKD disks.
In an attempt to allow some play for MVS in the big explosion in
mid-range ... eventually the 3375 was made available (CKD simulated on
FBA 3370), in effect giving up on MVS ever coming out with FBA support.

with regard to the big profusion of distributed vm/43xx (4331 & 4341)
... after 3375 became available, there theoretically was some MVS
opportunity ... except MVS required a significantly large amount of
skills & resources for its care & feeding.

recent post in an ibm-main thread about "XA" being the high-end
architecture extension to 370 (3081) and "E" the mid-range architecture
extension to 370 (4331 & 4341 ... i.e. 4331 was code-named "E3" and 4341
was code-named "E4")
http://www.garlic.com/~lynn/2011c.html#62

4331 might have used existing CKD disks ... or gotten new FBA disks,
3310s or the larger 3370s.

for other topic drift ... IBM field engineering support required being
able to diagnose failed components at the customer site. The level of
integration in 3081 meant no longer being able to apply probes as part
of diagnostic procedures. As a result, 3081 came with a "service
processor" with probes built in at manufacture time ... and a
rudimentary user interface was created for the "service processor".

In part, given the resources required for creating a "service processor"
specific operating system and applications ... it was decided to use a
specially custom-modified vm/cms (release 6) running on 4331 as the
service processor for 3090 (the service processor menu screens were
implemented in IOS3270). By the time 3090 shipped, the "service
processor" had been upgraded to a pair of 4361s (with 3310 FBA disks).

I'm giving a talk at the next Hillgang meeting 16Mar on a history of
virtual machine performance. The original talk was given at the Oct86
SEAS (European SHARE) meeting held on Jersey.

I spent several weeks getting the talk cleared through management and
legal since it mentioned some performance comparisons between "PAM"
(a page mapped filesystem that I had originally done for cp67 but which
wasn't released) and the existing HPO/3081 system. There were some
issues with a few releases of HPO performance enhancements during the
early to mid-80s. One of the issues in the mid-80s was some new people
in the VM group having done some comparison tests on HPO where the page
replacement infrastructure was reverted to effectively the original
global LRU (not *local*) changes I had made in the late 60s to
CP67. This had been a hot topic (not just the page replacement changes)
and there was possibly some concern that some of the details might
spill out in the talk

The original talk was scheduled for an hour ... but spilled over into a
five hour session that night at SCIDS (in an overflow room next to the
SCIDS room ... people could easily wander back and forth).

--
virtualization experience starting Jan1968, online at home since Mar1970

IBM and the Computer Revolution

"Charlie Gibbs" <cgibbs@kltpzyxm.invalid> writes:
Most typewriters had a ratchet mechanism that allowed you to move
the carriage backwards by hand; you only had to use the release
knob if you were moving forward.

Say, who remembers mechanical tab stops?

2741 had them ... the 2741 was effectively a beefed-up selectric with a
computer interface.

cp67/cms script ... there was a statement that could specify where the
tab stops had been placed (otherwise it would assume the default of
every ten spaces) for the handling of "real" tabs.
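
a sketch (python ... assumed behavior, not the actual script source) of
expanding "real" tabs against a settable tab-stop list, with the default
of a stop every ten columns:

def expand_tabs(line, stops=None):
    out, col = [], 1                 # col is the 1-based print column
    for ch in line:
        if ch == '\t':
            if stops:
                nxt = next((s for s in stops if s > col), col + 1)
            else:
                nxt = col + (10 - (col - 1) % 10)    # default: every ten
            out.append(' ' * (nxt - col))
            col = nxt
        else:
            out.append(ch)
            col += 1
    return ''.join(out)

print(expand_tabs("name\taddress\tphone"))           # default stops
print(expand_tabs("name\taddress", stops=[8, 25]))   # explicit stops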

Anne & Lynn Wheeler <lynn@garlic.com> writes:
I spent several weeks getting the talk cleared through management and
legal since it mentioned some performance comparisons between "PAM"
(a page mapped filesystem that I had originally done for cp67 but which
wasn't released) and the existing HPO/3081 system. There were some
issues with a few releases of HPO performance enhancements during the
early to mid-80s. One of the issues in the mid-80s was some new people
in the VM group having done some comparison tests on HPO where the page
replacement infrastructure was reverted to effectively the original
local LRU changes I had made in the late 60s to CP67. This had been a
hot topic (not just the page replacement changes) and there was possibly
some concern that some of the details might spill out in the talk

IBM and the Computer Revolution

hancock4 writes:
The Xerox could quadriplex on a page, which meant four pages on one
8.5x11 piece of paper. That was done, as someone mentioned, for
archival file reports. But they were awfully hard to read.
Generally, archival stuff was created directly on microfilm or
micofiche; there were machines available since the 1970s that did just
that. With a good reader, the micofiche was easy and crisp to read
and saved an enormous amount of space.

the san jose plant site had a microfiche printer ... you could route
output to the printer ... and get the output back in bldg. 28 the next
day or so.

I would have a fairly current complete copy of the vm370 source listings
as well as a bunch of other documents. routing output to the microfiche
printer ... you could specify the header output printed on the fiche
(which was large enuf to be human readable when fanning thru the fiche).

I no longer have the reader ... but I believe I have a couple hundred of
the microfiche in a box someplace.

the above example doesn't show the ability to have something embossed
across the top that was large enough to be human readable. next time I'm
going thru boxes ... I'll see if I can take a picture of some of the
microfiche.

re: shuttle; i was in the viewing stands when the last shuttle went up
thursday ... that wasn't you in a private airplane in the launch space
that ignored radio messages & had to have a chase plane sent after
it?????

IBM and the Computer Revolution

greymausg writes:
I am told that ypu can get people to 'clean' your HD's
after your death. Now, how will these people know you died?
(After all , this is the sort of expertise that one dosn't
get at the crossroads).. You would have to send a message, "
I expect to die at 8.30 on the 25th February, please call soon
after and erase my (whatever, mp3, jpg) from the HD"

however, there was a failure mode where the member possibly was just
playing dead ... before recovery could be completed, it was required to
cut the dead member from the configuration ... the scenario being that
it was just in suspended animation in front of a critical operation;
once recovery started, it was necessary to preclude the possibility of a
reviving (suspended) member proceeding to perform the critical
operation.

in two-way operation this included a "reserve" on all disks to lock out
any possible disk i/o that a member possibly in suspended animation was
about to do. in an n-way operation, things get more complex ... since a
"reserve" is designed to lock out everyone except the one issuing the
reserve. what is desired is somewhat the reverse of a "reserve" ...
allow everybody but the identified member(s).
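
a sketch (python, illustrative data structures only ... not any actual
channel/controller interface) of the two lock-out semantics: a "reserve"
locks out everyone except the holder, while the desired "fence" locks
out only the named suspect member(s) and lets everyone else proceed:

class Device:
    def __init__(self):
        self.reserved_by = None      # reserve: single allowed member
        self.fenced = set()          # fence: explicitly excluded members

    def reserve(self, member):
        self.reserved_by = member

    def fence(self, *members):
        self.fenced.update(members)  # cut suspected-dead members out

    def may_do_io(self, member):
        if self.reserved_by is not None and member != self.reserved_by:
            return False             # reserve excludes everybody else
        return member not in self.fenced

d = Device()
d.fence("node3")                     # node3 may just be playing dead
assert d.may_do_io("node1") and not d.may_do_io("node3")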

there was some fiddling in a HYPERChannel configuration to achieve
this ... where HYPERChannel was being used for both inter-processor
communication and disk i/o. recent mention of NCAR (HYPERChannel A51n,
remote device adapter, simulated mainframe channel to disk controller)
http://www.garlic.com/~lynn/2011b.html#58 Other early NSFNET backbone

There was some work on a HiPPI switch (for use with IPI disks) to
provide an equivalent "fencing" function in the switch (i.e. lock out a
suspected member that had died; basically if suspected of having died
... make sure they are truly finished off), which, in combination with
HiPPI-switch support for 3rd party I/O transfers ... would allow for
roughly equivalent operation to that provided in the HYPERChannel
environment.

RISC versus CISC

Anne & Lynn Wheeler <lynn@garlic.com> writes:
the guy running the TPM effort was in the front row ... so I quipped
that it was nice to see TPM starting to look more & more like my chip
... and he quipped back that I didn't have a committee of 200 people
helping me with the design.

NSA Winds Down Secure Virtualization Platform Development; The National
Security Agency's High Assurance Platform integrates security and
virtualization technology into a framework that's been commercialized
and adopted elsewhere in government
http://www.informationweek.com/news/government/security/showArticle.jhtml?articleID=229219339

IBM and the Computer Revolution

Patrick Scheible <kkt@zipcon.net> writes:
Yes, that is why I blame MS. They should have either rewritten DOS or
released a new (but upwardly compatible) OS to take advantage of the
memory protection hardware that was available from the mid-80s on.

in the 80s, I did a cms to unix mail/822 translation exec that I
distributed on the internal network. from long ago and far away:
REMAIL can be used to forward all CMS mail to your RT via the SMTP
mail gateway.

The keyword variables are obtained from GLOBALV group "SMTP"
if not specified on the command line. If no specification
is available, the user will be asked for one (& the GLOBALV SETP
function invoked). Command line specification is keyword without
imbedded blanks (and overrides any GLOBALV settings, but doesn't
reset them).

REMAIL will examine every spooled file in your reader. When mail files
are found (NOTE, PROFS, VMSG, MAIL, etc), they are read, reformatted
and forwarded to the SMTP mail gateway.

RMFORW

RMFORW is an EXEC that when invoked will run continuously. It will
invoke REMAIL EXEC anytime a spool file arrives in the reader. This EXEC
also uses SMSG to intercept immediate messages for forwarding. While
executing, RMFORW will accept the following terminal input as commands:

DISC - Disconnect from terminal
EXIT - End the mail and messaging forwarding shell
HELP - Display this message
QUIT - Same as EXIT

RMEXIT is a "user exit" EXEC that can be modified to selectively bypass
processing of files/messages from specific userids.

REMAIL invokes special processing if an 822 mail file arrives from the
specified SMTP mail gateway that originated from the same address that
CMS mail is being forwarded to. Rather than "looping" the mail, the file
is assumed to be a request to execute a CMS command. After validating
the 822 mail format, an "X-cms-command:" line is executed as a CMS
command after deleting the file (note: Subject: is no longer handled
as a CMS command).

....

... snip ...
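
a sketch (python, hypothetical names except the quoted "X-cms-command:"
header) of the loop-breaking rule described above: mail that comes back
from the gateway bearing the forwarding user's own origin address isn't
forwarded again, it is treated as a command request:

MY_ADDR = "someuser@example.edu"     # assumed placeholder address
GATEWAY = "smtp-gateway"

def handle_inbound(msg):
    if msg.get("via") == GATEWAY and msg.get("from") == MY_ADDR:
        # forwarding it again would loop ... execute the embedded command
        return ("execute", msg.get("X-cms-command"))
    return ("forward", msg)

print(handle_inbound({"via": GATEWAY, "from": MY_ADDR,
                      "X-cms-command": "QUERY TIME"}))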

--
virtualization experience starting Jan1968, online at home since Mar1970

IBM Future System

There was a specially modified VM370 system that had softcopy of some
number of FS documents ... so that the documents could only be read on
local 3270 terminals, with no printing and/or copying of the material
allowed in any way.

I had gone by Friday afternoon to get setup for some off-shift weekend
test time in that machine room and they taunted me that even if I was
left alone in the machine room, "even" I wouldn't be able to access
the documents. I mentioned something about 5 mins ... most of the time
spent disabling all external access to the machine; I then modified a
byte in storage so that everything typed would be treated as a valid
password. I pointed out that the front panel would need some sort of
authorization infrastructure and/or the documents encrypted.

from ibm jargon:
FS - n. Future System. A synonym for dreams that didn't come
true. That project will be another FS. Note that FS is also the
abbreviation for functionally stabilized, and, in Hebrew, means zero,
or nothing. Also known as False Start, etc.

from ibm jargon:
TIME/LIFE - n. The legendary (defunct since 1975) New York Programming
Center, formerly in the TIME & LIFE Building on 6th Avenue, near the
Rockefeller Center, in New York City. For many years it was the home
of System/360 and System/370 Languages, Sorts and Utilities. Its
programmers are now primarily in Kingston, Palo Alto, and Santa Teresa
(or retired).

... snip ...

there was a joke that the Burlington Mall vm370 development group (see
up thread) was put under the same executive responsible for the
"TIME/LIFE" shutdown ... and therefore it should have been obvious to
the Burlington Mall group as to what was coming.

Apparently the plan was to not tell the group until the very last
moment, maximizing the number that would be moved to POK (to support
mvs/xa development) and minimizing the possibility of finding
alternatives in the Boston area. However, the information was leaked to
the group ... and there was then a witch hunt to identify the source of
the leak (creating an extremely paranoid atmosphere in the bldg).

--
virtualization experience starting Jan1968, online at home since Mar1970

from above:
In 1928, IBM introduced a new version of the punched card with
rectangular holes and 80 columns. This newly designed "IBM Computer
Card" was the end result of a competition between the company's top two
research teams, working in secrecy from one another.

... snip ...

and related to FS thread drift ... from ibm jargon:
FS - n. Future System. A synonym for dreams that didn't come
true. That project will be another FS. Note that FS is also the
abbreviation for functionally stabilized, and, in Hebrew, means zero,
or nothing. Also known as False Start, etc.

If IBM Hadn't Bet the Company

Peter Flass <Peter_Flass@Yahoo.com> writes:
From what I heard about it at the time - and AS/400 seems to show - it
had a lot of good ideas, if somewhat ahead of its time. On the other
hand, if implemented by the same people who brought you SNA ... (not a
system, not a network, and certainly not an architecture)

If IBM Hadn't Bet the Company

"Joe Morris" <j.c.morris@verizon.net> writes:
h'mmm...that sounds like it might have been in Mike Cowlishaw's "dictionary
of the IBM language".

Speaking of Mike, didn't I see somewhere that he had retired a few years
ago?

yes & yes (ibm jargon is another name for it) ... for some reason he is
on the HILLGANG mailing list ... which recently mentioned I'm giving a
talk at the next meeting ... prompting him to send some email this
morning and we exchanged several.

(also) from ibm jargon:
BYTE8406 - bite-eighty-four-oh-six. v. To start a discussion about old
IBM machines. forum

BYTE8406 syndrome - n. 1. The tendency for any social discussion among
computer people to drift towards exaggeration. Well, when I started
using computers they didn't even use electricity yet, much less
transistors. forum  2. The tendency for oppression to waste resources.
Derives from the observation that erasing a banned public file does
not destroy the information, but merely creates an uncountable number
of private copies. It was first diagnosed in September 1984, when the
BYTE8406 forum was removed from the IBMPC Conference.

--
virtualization experience starting Jan1968, online at home since Mar1970

re: hpo; spent some time last tues. with people putting the global LRU
page replacement algorithm back into VM/HPO (it was taken out by the
hpo3.4 group). they have a very good performance comparison with
hpo3.4. They are almost done with the implementation of a correct
global LRU replacement algorithm for >16meg real memory support (the
hpo2.5 support for >16meg real memory screwed up the global LRU
replacement algorithm ... although they thought the code was similar,
small changes in the way some bits were tested resulted in an
algorithm other than global LRU being implemented).

It looks like they will have the page replacement algorithm put back to
the way it was prior to HPO2.5 (i.e. before >16meg support & swapper
support), plus >16meg support implemented with true global LRU page
replacement ... performing much better than the hpo3.4 swapper stuff &
the hpo2.5 >16meg support.

I also found out from the people working on putting my global LRU page
replacement algorithm back in that IBM handed out 6 OIAs for removing
it (I had previously believed that there were one or two, but wasn't
sure). It is funny, since prior to releasing HPO3.4 they were claiming
that over 80% of the SYSPAG code was written by "Lynn Wheeler" to clean
up large portions of various pieces of CP.

I am planning on changing my XA dispatcher to execute the SIE
instruction with I/O interrupts disabled (external interrupts will
still be enabled). The SIE instruction is a very expensive
instruction and I want to give the virtual machine a chance to do some
productive work before taking an interrupt. With I/O interrupts
disabled, the virtual machine will get to run until it relinquishes
control to CP or hits the end of the dispatcher timeslice. The I/O
supervisor already uses the TPI instruction to process all pending
interrupts before returning control to the dispatcher.

Do you have any thoughts on the matter? I have read CPDESIGN FORUM on
the IBMVM disk, so I know what has been discussed there. Do you have
a ballpark figure for the maximum allowable dispatcher timeslice which
would allow satisfactory I/O throughput? I am thinking about
disabling I/O interrupts for a maximum of 10 milliseconds. Another
alternative would be to have DMKSTP set/reset the interrupt mask based
upon the observed I/O interrupt rate.

above includes old email referencing the significant resources poured
into vmtool to make it available to customers (as opposed to one
person who enhanced vm/sp/hpo to support 370/xa).

I had done something similar (dispatching disabled for i/o interrupts)
in my original resource manager ... on heavily loaded systems, to
minimize the effect that asynchronous interrupts had on cache hit
ratios.
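
a rough sketch of the dispatch flow being proposed in the email above
(pseudo-code only; SIE/TPI are represented by stand-in routines, the
10ms bound is just the figure from the email rather than a measured
value, and all the helper names are invented):

# Run the guest with I/O interrupts masked (external interrupts stay on),
# bounded by a timeslice, then drain every pending I/O interrupt
# (TPI-style) before handing control back to the dispatcher.
_pending = []                              # stand-in interrupt queue

def disable_io_interrupts(): pass          # hypothetical hardware mask ops
def enable_io_interrupts(): pass
def run_guest_sie(vm, timeslice_ms): pass  # guest runs until yield/timeslice
def pending_io_interrupt(): return bool(_pending)
def handle_io_interrupt(): _pending.pop()

MAX_DISABLED_SLICE_MS = 10                 # candidate upper bound from the email

def dispatch(vm):
    disable_io_interrupts()
    try:
        # guest gets a chance to do productive work without being bounced
        # out of SIE by every I/O interrupt
        run_guest_sie(vm, timeslice_ms=MAX_DISABLED_SLICE_MS)
    finally:
        # TPI-style drain: take every queued I/O interrupt before
        # returning control to the dispatcher, then unmask
        while pending_io_interrupt():
            handle_io_interrupt()
        enable_io_interrupts()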

and from ibm jargon
conferencing facility - n. A service machine that allows data files to
be shared among many people and places. These files are typically
forums on particular subjects, which can be added to by those people
authorised to take part in the conference. This allows anyone to ask
questions of the user community and receive public answers from it.
The growth rate of a given conferencing facility is a good indication
of IBMers' interest in its topic. The three largest conferences are
the IBMPC, IBMVM, and IBMTEXT conferences, which hold thousands of
forums on matters relating to the PC, VM, and text processing,
respectively. These are all open to any VNET user. append, forum,
service machine

... snip ...

--
virtualization experience starting Jan1968, online at home since Mar1970

Hillgang -- VM Performance

on 02/23/2011:
and how many of these areas evolved over the last twenty years.

I'm not sure about the last 20yrs ... I may have to do some research
between now and the talk. For the previous 20yrs, I reverted things at
least twice.

A lot of stuff that I had done for cp67 as an undergraduate in the 60s,
and that was released in cp67, was dropped in the simplification morph
from cp67->vm370.

I continued to do a lot of 370 stuff all during the future system
period (when 370 stuff was being killed off). With the demise of
future system, there was a mad rush to get stuff back into the 370
product pipelines ... some amount of stuff I had been doing was picked
up and released in vm370 R3. Then some more was selected to be
released as my "Resource Manager".

Then in the mid-80s, it was taking several weeks to get management and
legal approval for the SEAS talk. There was some dustup about some of
the changes made in HPO2.5, HPO3.4, etc ... reverting again to the way
I had done things in CP67 ... and they were apparently worried that
some of that would leak out in the talk. One case was global LRU page
replacement ... somebody in the development group was reverting to the
cp67 approach. A couple of yrs earlier, somebody had been trying to
get a Stanford PHD in the area of global LRU and was being opposed by
"local LRU" forces in academia. It took nearly a year to get
management approval to send a reply regarding the work I had done in
the 60s on global LRU. part of that reply
http://www.garlic.com/~lynn/2006w.html#email821019 in this old post
http://www.garlic.com/~lynn/2006w.html#46

As an aside ... there was a post in comp.arch within the past two days
about processor hardware caches using a very similar strategy for
replacing cache lines.
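
for anyone who hasn't seen "global LRU" spelled out, it amounts to the
clock/reference-bit approximation run over all real page frames (rather
than over per-user sets); a minimal sketch of that idea (not the CP67
or VM/HPO code, and ignoring the >16meg complications):

# Clock-style global LRU approximation: a single "hand" sweeps all real
# page frames; a frame whose reference bit is set gets a second chance
# (bit cleared), and the first frame found with the bit clear is stolen.
class Frame:
    def __init__(self, page=None):
        self.page = page
        self.referenced = False    # hardware reference bit, cleared by the sweep

class GlobalClock:
    def __init__(self, nframes):
        self.frames = [Frame() for _ in range(nframes)]
        self.hand = 0

    def select_victim(self):
        while True:
            f = self.frames[self.hand]
            self.hand = (self.hand + 1) % len(self.frames)
            if f.referenced:
                f.referenced = False   # give it another sweep's worth of time
            else:
                return f               # not referenced since last sweep: steal it

    def fault_in(self, page):
        victim = self.select_victim()
        victim.page = page             # (writing out a changed victim omitted)
        victim.referenced = True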

for other drift ... after the talk about z/VM cluster software a couple
Hillgang meetings ago ... I posted about "From The Annals of Release No
Software Before Its Time" regarding cluster support having been done
for internal HONE support in the late 70s (more than 30yrs earlier):
http://www.garlic.com/~lynn/2009p.html#43

IBM and the Computer Revolution

Peter Flass <Peter_Flass@Yahoo.com> writes:
We made little bombs by putting matchheads between two large bumper
bolts with a nut. The thing exploded when tossed against something
and frequently blew apart into three pieces. It would have been
pretty easy to put out an eye.

Wow... What a torrent of ideas you've shipped out here while I was off
skiing Tuckerman! On another subject, I've been meaning to reply
concerning the distinction between changed and unchanged unreferenced
pages as regards moving them from below to above the notorious 16m
line. I agree with you 100%... the name of the game is global LRU,
and all that matters is reference.

The version of the glru prototype that ran here a while ago fixed much
of that area, while leaving the code which distinguished between
changed and unchanged pages alone. The "trick" was that all pages
that were going to be swapped were considered changed for purposes of
the page move, and that no private page would then ever appear
unchanged or unswappable (all pages read in singly must eventually be
swapped, while all pages read from swap sets are explicitly marked
changed by the destructive read). This still left the set of
non-private pages (and pseudo pages), since they cannot be swapped.
Some of these (system address space and virtual page zeroes) can not
be moved up for other implementation reasons, but shared pages were
still being denied their fair shot at a potential move up as opposed
to being immediately freelisted (and later reread). The latest
version of the prototype covers this.

I expect a benefit to trickle through to the extend code as well,
since now many of the pages it will find will be moved instead of
freelisted. Even if extend fails to find an unchanged page, now there
is still an excellent chance for non-loss-of-control extending.

with regard to above ... "EXTEND" is the process invoked when the cp
kernel runs out of available kernel storage and scavenges a "pageable
page" for that purpose. The original code in cp67 would just invoke
the standard page replacement algorithm ... which might select either
a changed or non-changed page, purely on the reference bit. The
downside is that selecting a changed page introduced delay (and other
activity) to first write the changed page to disk (a delay which might
result in kernel failure from attempting to do a "new" extend while
already extending). While I was at Boeing ... I modified the CP67
kernel to use "BALR" linkages between high-use kernel routines
(including FREE/FRET, kernel storage management). I also modified
"EXTEND" processing to first search all pageable pages for a
non-changed page (to minimize kernel failure).
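
a minimal sketch of that "clean page first" preference (invented
structure; "changed" here stands for the hardware change bit, and the
fallback is whatever the normal replacement algorithm would have
chosen):

# EXTEND sketch: when kernel free storage is exhausted, steal a pageable
# frame. Prefer an unchanged (clean) frame, which can be taken with no
# page-out; a changed frame would first have to be written to disk, and
# that delay risks needing another extend while this one is in progress.
class PFrame:
    def __init__(self, page, changed):
        self.page = page
        self.changed = changed      # hardware change bit

def extend(pageable_frames, select_by_replacement):
    for f in pageable_frames:
        if not f.changed:           # clean: safe to grab immediately
            return f
    # no clean frame anywhere: fall back to the normal replacement choice
    # and accept the page-out delay (the risky case described above)
    return select_by_replacement(pageable_frames)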

I've just submitted some details on the GLRU line item to our patent
office. No telling whether I'll get lucky, but I took the liberty of
including you in the invention of the new management of the <16m area
(when that area is more constrained than storage in general).
So... don't be surprised if someone from Kingston looks you up. And
don't be surprised if they don't, for that matter. We can only hope.

re: yyyyyyy; xxxxxx in yyyyyyy modified vm/sp over a year ago to run
xa-mode ... and then started work on upgrading to 3.4/4.2 hpo.
yyyyyyy has been running the code in production for some of their
stuff. i've had a number of exchanges with xxxxxx. strong rumor was
that when kingston 1st heard of it, kingston management contacted
yyyyyyy and attempted to have it killed and all references to its
existence obliterated.

latest i've heard is that endicott has made some sort of offer to
xxxxxx and they were expecting answer this week.

from ibm jargon:
dual-path - 1. v. To provide alternative paths through program code in
order to accommodate different environments. Since the CP response is
different in VM/SP and VM/XA, we'll have to dual-path that Exec.
special-case  2. v. To make a peripheral device available through more
than one channel. This can improve performance, and, on
multi-processor systems, allows the device to be available even if one
processor is off-line.

--
virtualization experience starting Jan1968, online at home since Mar1970

Quadibloc <jsavard@ecn.ab.ca> writes:
It's true that AS/400 showed that the FS project wasn't a dead loss
for IBM. But the hardware independence and abstraction of the FS
design, which was also present in AS/400, resulted in less computing
power per dollar being delivered to customers.

There may also have been benefits to customers, but I can't help but
suspect that the chief benefits were to IBM in terms of making the
architecture more thoroughly proprietary and enhancing account
control. So, while IBM didn't waste its money, I'm not sure I can say
it was based on "good ideas".

The rest of the industry, at least, has shown no inclination to copy
those ideas.

FS was doing single-level store ... a la the earlier tss/360, multics,
etc ... but got a lot of the details wrong. s/38 incorporated some of
the ideas ... implementing single-level store with a single (48bit)
virtual address space (everything in the system existing in a single
address space).

everything in the whole infrastructure being mapped into that single
(48bit) virtual address space somewhat contributed to s/38 scatter
allocation across all available devices; an s/38 backup required all
available disks as a single operation ... and a restore required all
available disks as a single operation (major scaleup issues; say a 300
drive disk farm ... where a single disk failure would require restoring
all 300 disks). the single-disk-failure mode issues with scatter
allocation were a big motivator for s/38 being an early raid adopter
in the 80s.
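
a toy illustration of the scatter-allocation point (not the actual s/38
storage manager): with one system-wide address space the allocator is
free to spread any object's extents across every drive, so no single
drive holds a self-contained set of objects:

# Toy scatter allocation: extents of every object are spread round-robin
# across all drives, so a single drive failure damages nearly every
# object -- which is why restore is effectively an all-drives operation.
class ScatterStore:
    def __init__(self, ndrives):
        self.ndrives = ndrives
        self.next = 0
        self.extent_map = {}            # object -> list of (drive, extent#)

    def allocate(self, obj, nextents):
        for i in range(nextents):
            drive = self.next
            self.next = (self.next + 1) % self.ndrives
            self.extent_map.setdefault(obj, []).append((drive, i))

    def objects_touching(self, drive):
        # every object with any extent on 'drive' is damaged if it fails
        return [o for o, exts in self.extent_map.items()
                if any(d == drive for d, _ in exts)]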

there have been various claims that the intel i432 adopted a design
similar to the s/38 ... old post with a quote from an i432 intro
(mentioning both B5000 and S/38):
http://www.garlic.com/~lynn/2000f.html#48 Famous Machines and Software that didn't

a big difference between the i432 and s/38 (and as/400) was that the
i432 had a huge amount of complex stuff in silicon ... and any fixes
required a new chip (while in s/38 & as/400 all that complexity was
effectively software).

mentions pulling together all the potential participants in the NSFNET
backbone for a 2-day meeting at IBM ... however, internal politics was
kicking in ... and they were all called up and told the meeting was
canceled.

Irrational desire to author fundamental interfaces

EricP <ThatWouldBeTelling@thevillage.com> writes:
I was thinking that rather than looking at microcode, [one could look
at] an emulator accelerator. I was thinking about this for the Alpha,
way back when they ran VAX and x86 emulators.

low & mid-range 370s were all "microcode" ... effectively software
emulation that averaged about a 10:1 ratio of native to 370
instructions (aka a 10mip processor to deliver a 1mip 370). estimates
were that about 1/3rd of that ratio was dealing with the possibility
of self-modifying instructions; amdahl is purported to have had
"macrocode" ... very much 370 but eliminating support for
self-modifying instructions (and other differences).
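
(taking those estimates at face value: at 10:1, a 10mip microengine
delivers roughly 1mip of 370; if about a third of those ten native
instructions per 370 instruction went to guarding against
self-modifying code, dropping the guard would leave something like a
6.7:1 ratio, i.e. roughly 1.5 370 mips from the same engine ...
back-of-the-envelope arithmetic only.)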

late 70s ... there was an effort to converge all the internal
microprocessors (not just 370 micro-engines ... but all the controllers
and other processors) to 801/risc (the "iliad" chips had
features/extensions specifically for supporting emulation). misc.
past posts mentioning 801, risc, romp, rios, iliad, etc
http://www.garlic.com/~lynn/subtopic.html#801

there have been a few commercial vendors of 370 emulation ... running
on i86 and other platforms ... like
http://www.funsoft.com/

at least some of the 370 emulators ... included support for JIT "370
compile" ... sequences of 370 code (presumably frequently executed)
translated to native for direct execution (with lots of tracking for
"self-modifying" events).

more recently there have been mainframe discussions of some sort of x86
emulator (on mainframe) capable of executing windows.

--
virtualization experience starting Jan1968, online at home since Mar1970