freelance wrote:
Hi
on what manual can I find an explanation for the "sense data" reported
by EREP ?

some of the sense bits are generic ... most sense bits are device
specific. there was an ios3270 version of the 360 green card done long
ago and far away.

ios3270 was part of a menu application package done by theo alkema at
the uithoorn (EU) HONE system that also included the 3270 screen
applications fulist and browse. HONE was the internal world-wide
marketing, sales, and field support vm/370-based system.
http://www.garlic.com/~lynn/subtopic.html#hone

many people may be familiar with ios3270 menu applications as the
screens in the 3090 service processor (the 3090 service processor was
actually a pair of 4361s running vm/370 with the service screens
implemented in ios3270).

the 360/67 "blue" card included several panels that gave sense bit
definitions for several devices. long ago and far away, I had updated
gcard with sense data for several devices.

Average Seek times are pretty confusing

"cambpellster.john@gmail.com" writes:
I am trying to find the average seek time of the following
single-platter disk. I know that the seek time is 1ms for every 100
tracks traversed. The total number of tracks on one side of the platter
is 30,000. Then what is the average seek time?
I think it is: 1ms/100 .... it makes sense, right?

you need to take the avg. of all the seek distances. many times the
avg. seek distance is taken as half the maximum seek distance. given
that you know the avg. seek distance, then you can calculate the
avg. seek time (based on the avg. seek distance).
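
a back-of-the-envelope sketch in c (assumes uniformly random,
independent start and target tracks, and the linear 1ms-per-100-tracks
figure from the question; under that assumption the expected distance
comes out near one-third of the maximum rather than half):

#include <stdio.h>
#include <stdlib.h>

int main(void)
{
    const int tracks = 30000;                /* tracks on one surface */
    const double ms_per_track = 1.0 / 100.0; /* 1ms per 100 tracks */
    const int samples = 1000000;
    double sum = 0.0;

    srand(1);
    for (int i = 0; i < samples; i++) {
        int from = rand() % tracks;          /* where the arm is */
        int to   = rand() % tracks;          /* where the request is */
        sum += abs(from - to);
    }
    double avg_dist = sum / samples;         /* comes out near tracks/3 */
    printf("avg seek distance ~ %.0f tracks\n", avg_dist);
    printf("avg seek time     ~ %.1f ms\n", avg_dist * ms_per_track);
    printf("half-max estimate ~ %.1f ms\n", (tracks / 2.0) * ms_per_track);
    return 0;
}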

many disks have a situation where it takes longer per track for short
seek distances than for long seek distances (having at least to do
with startup inertia and acceleration of the arm from rest).

there was once a couple high-school kids visiting this dam late at
night (for the first time) ... and the driver turned onto the highway
across the top of the dam and floored the gas pedal ... attempting to
see peak speed across the top. as they were coming to the other end of
the dam, one of the passengers mentioned that the driver should slow
down ... and the driver said there was no problem that they would just
coast thru the upcoming tunnel.

turns out that it wasn't a tunnel ... it was a garage where they
parked a large crane ... with a solid concrete wall in the back. they
barely avoided an extremely destructive instantaneous deceleration.
what was the avg. speed across the top of the dam?

"Bob" writes:
Doesn't this assume the disk is full? It also assumes that
disk accesses are random, which is far from true in most
real systems. Files that are written close together tend to
be read close together.

when i was an undergraduate ... i reorganized a mainframe filesystem
so that the avg. elapsed time for a standard job at the university
improved by nearly a factor of three. the major thruput bottleneck was
seek time ... significantly reducing avg. seek time resulted in a
corresponding thruput increase (and decrease in elapsed time, from a
little over 30 seconds to a little under 13 seconds).

note, in the referenced presentation above, there are several
different items discussed.

one is the re-organizing of the mainframe batch filesystem and disk
layout to significantly improve the thruput, from well over 30 seconds
(in a default, standard, out-of-the-box organization) to 12.9
seconds (with the re-organization).

the other work described in the presentation was significantly
rewriting major pathlengths in the virtual machine cp/67 kernel to
reduce the cpu utilization (and the combined result of running the
virtual machine cp/67 kernel in combination with the batch operating
system).

not described in the presentation ... was that i had also rewritten
portions of the disk i/o scheduling to introduce ordered arm seek
scheduling (aka when there are multiple queued i/o requests for the
same disk, the queue is re-ordered to sort the i/o requests for
minimum arm seek distance). the change to ordered arm seek queueing
(from FIFO) tended to further reduce the avg. arm seek distance under
heavy load. with ordered arm seek queuing, the avg. seek distance
tended to decrease as the load increased.
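
a minimal sketch of the idea (not the cp/67 code; the request queue
and arm position below are made up): re-ordering a fifo queue of
pending requests by cylinder so the arm sweeps through them instead of
bouncing back and forth:

#include <stdio.h>
#include <stdlib.h>

/* pending i/o requests identified only by target cylinder */
static int cmp_cyl(const void *a, const void *b)
{
    return *(const int *)a - *(const int *)b;
}

/* total arm travel servicing the queue in the given order */
static int travel(const int *q, int n, int start)
{
    int pos = start, total = 0;
    for (int i = 0; i < n; i++) {
        total += abs(q[i] - pos);
        pos = q[i];
    }
    return total;
}

int main(void)
{
    int fifo[] = { 812, 15, 640, 23, 777, 101 };    /* arrival order */
    int n = (int)(sizeof fifo / sizeof fifo[0]);
    int sorted[6];
    for (int i = 0; i < n; i++)
        sorted[i] = fifo[i];
    qsort(sorted, n, sizeof sorted[0], cmp_cyl);    /* ordered seek queue */

    int arm = 400;                                  /* current arm position */
    printf("fifo order arm travel:   %d cylinders\n", travel(fifo, n, arm));
    printf("ordered seek arm travel: %d cylinders\n", travel(sorted, n, arm));
    return 0;
}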

another change that i made in the paging implementation was slot
chaining, when there were independent I/O requests for different
records on the same "cylinder" (records on the same track or records
on different tracks at the same arm position in multi-platter
devices). this had some benefit in the thruput in records
processed per second on moveable arm disks ... but it showed even
greater thruput in the number of records per second on "fixed-head"
devices (i.e. devices that had a read/write head per track so there
was no track-to-track arm motion). request chaining on the 2301
fixed-head device (used in cp/67 systems) increased the peak record
transfer thruput from approx. 80 4k records per second to 300 4k
records per second.
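
a toy amortization model of why chaining helps (all the millisecond
numbers below are made up for illustration, not 2301 measurements):
the fixed per-i/o overhead gets spread across however many records are
chained into the single operation:

#include <stdio.h>

int main(void)
{
    /* illustrative costs in milliseconds -- not measurements */
    const double per_io_overhead = 10.0;  /* start i/o, interrupt, requeue */
    const double per_record_xfer = 2.0;   /* transfer time for one 4k record */

    for (int chained = 1; chained <= 8; chained *= 2) {
        double time_per_io  = per_io_overhead + chained * per_record_xfer;
        double recs_per_sec = chained * 1000.0 / time_per_io;
        printf("%d record(s) chained per i/o -> %.0f records/sec\n",
               chained, recs_per_sec);
    }
    return 0;
}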

finally, here are collected postings on numerous changes to filesystem
implementation code i made in the early 70s ... one of the changes was
support for contiguous allocation, which tried to allocate records for
the same file at contiguous physical disk locations (increasing
the ability to do multiple request record chaining):
http://www.garlic.com/~lynn/submain.html#mmap

improving contiguous allocation can increase the effectiveness of
multiple record chaining (i.e. multiple records transferred in a
single operation). again this can result in the avg. seek distance
increasing, since a multiple request transfer may appear as a single
seek request (depends on how the infrastructure statistics treat
zero arm motion operations and multiple record transfer operations).

David Boyes writes:
Don't know whether anyone else may ever need this, but FYI: Appendix 1
of the PVM Admin and Operation manual does document the buffer layout
and sequence of events for PVM link communications. There are also some
examples of applications talking to PVM on the old SHARE mods tapes that
were quite helpful.

Stay tuned. I should have some interesting results to present shortly.

Jeremy Linton writes:
Again, i did some benchmarking on a true "average" seek time
for an application I was involved in writing a few years ago and was
amazed that the average seek time for the application was a factor
of 10 less than the average seek time published by the drive
manufacture. So that number is really sort of an average "worse"
case figure. Of course your mileage may vary...

an early full-track controller cache was the 3880-13 for 3380 drives. a
fairly typical 3380 configuration was ten 4k records per track. for
sequential file reads ... the 3880-13 cache would do a miss on the 1st
record read for a track and the subsequent nine record reads would all
then be "cache" hits. based on this characteristic there were 3880-13
marketing statements about having a 90 percent hit rate.

however, somebody pointed out that if the application was to do large
buffering resulting in full-track sequential reads rather than
individual record sequential reads ... the same, identical operation
would drop from a 90 percent cache hit rate to a zero percent cache
hit rate (and run faster).
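
the arithmetic behind the two numbers (a sketch; the ten records per
track is from the paragraph above):

#include <stdio.h>

int main(void)
{
    const int recs_per_track = 10;   /* typical 3380 4k-record geometry */

    /* record-at-a-time sequential reads: 1 miss + 9 hits per track */
    double record_reads_hit_rate = (recs_per_track - 1.0) / recs_per_track;

    /* full-track reads: one i/o per track, and it always misses */
    double fulltrack_hit_rate = 0.0;

    printf("record-at-a-time hit rate: %.0f%%\n", record_reads_hit_rate * 100);
    printf("full-track-read hit rate:  %.0f%%\n", fulltrack_hit_rate * 100);
    return 0;
}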

the avg. seek time is actually based on the avg. seek distance.
real-life seek distances can have enormous variance. in many cases,
avg. seek time based on some assumed seek distance distribution ... is
more for comparison of different drives than accurately predicting
exactly how a particular environment is going to operate.

avg. access time tends to be a combination of some assumed seek
distance distribution and avg. rotational delay.
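
a minimal sketch of that combination (the drive parameters below are
made up for illustration, not any particular product):

#include <stdio.h>

int main(void)
{
    double avg_seek_ms = 16.0;                  /* from assumed distribution */
    double rpm         = 3600.0;
    double avg_rot_ms  = 0.5 * 60000.0 / rpm;   /* half a revolution */

    printf("avg access ~ %.1f (seek) + %.1f (rotation) = %.1f ms\n",
           avg_seek_ms, avg_rot_ms, avg_seek_ms + avg_rot_ms);
    return 0;
}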

when i got in trouble pointing out to the disk division that the
relative system thruput of disks had declined by a factor of ten over
a 10-15 year period ... they assigned their performance & modeling
organization to refute my statement. after a number of weeks, the
group came back with the finding that I had actually understated the
problem. part of this was that the other components of the system had
seen performance increases much larger than the improvement in disk
avg. access time (and/or other disk thruput measures).

Page fault question (zero-filling)

"ldb" writes:
To make matters worse, deep within the bowels of the memory-management
code, linux basically has a line of code like this:
    if (in_interrupt())
        do_slow_zeroing();
    else
        do_fast_zeroing();

long ago and far away ... when cp67 was first delivered to the
university, the method of zero filling was to have a special all-zeros
page on disk. address space virtual pages were all initialized to
point to that page on disk (so on the 1st fault, the special zeros
page would be read from disk).

i changed that (when i was rewriting lots of other code) so that
address space tables were initialized to a special zeros indication,
and then added special code in the page fault handler that initialized
a bunch of registers to zero and did a (360) STM/BXLE loop, clearing
the page. at the time, on the 360/67, it was the fastest way that I
could come up with for clearing a block of storage to zeros.
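
the STM/BXLE trick doesn't translate directly into portable c, but the
idea (clear the page with the widest block stores available instead of
a byte-at-a-time loop) looks roughly like this sketch:

#include <stdint.h>
#include <stdio.h>
#include <string.h>

#define PAGE_SIZE 4096

/* clear a page a doubleword at a time -- conceptually what the
   register-block STM/BXLE loop was doing; today you'd just call
   memset() */
static void zero_page(void *page)
{
    uint64_t *p = page;
    for (size_t i = 0; i < PAGE_SIZE / sizeof *p; i++)
        p[i] = 0;
}

int main(void)
{
    static uint64_t page[PAGE_SIZE / sizeof(uint64_t)];

    memset(page, 0xff, sizeof page);   /* dirty the page first */
    zero_page(page);
    printf("first word %llu, last word %llu\n",
           (unsigned long long)page[0],
           (unsigned long long)page[PAGE_SIZE / sizeof(uint64_t) - 1]);
    return 0;
}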

UDP and IP Addresses

"theinvisibleGhost" writes:
Thanks for your help, sorry for the slow reply I've not been around.
I must be misunderstanding something still however. From
what I read I thought that UDP datagrams were placed in the data area
of the IP datagrams? Are you saying that UDP datagrams actually
replaces the IP datagrams?

part of this is how you look at the implementation ... higher-level
transmissions frequently are encapsulated by headers/trailers as they
proceed down the protocol stack (and then the outer layers are
stripped off at reception as they proceed back up the protocol stack
... peeling an onion).

one of the long time network protocol performance issues has been
whether you copy the data from the higher level stack into your own
buffer and then wrapper it with lower-level control information ... or
whether you can use scatter/gather i/o techniques with i/o address
pointers and avoid doing buffer copies as you proceed down & up the
protocol stack.

scatter/gather would have a series of i/o addresses and lengths that
are processed sequentially by the i/o hardware, moving the data along
as a continuous stream (but from different, non-contiguous locations
in memory). protocol encapsulation as you proceed down the protocol
stack ... instead of physically creating a new copy of the data with
the lower-level protocol headers/trailers ... just creates a
continuous stream of data in the correct logical order by the way you
order the sequence of i/o addresses being processed by the i/o
hardware.
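
on unix-flavored systems the scatter/gather idea shows up as the
iovec/writev interface ... a sketch (the header contents here are made
up, just to show a header prepended without copying the payload into a
combined buffer):

#include <stdio.h>
#include <string.h>
#include <sys/uio.h>
#include <unistd.h>

int main(void)
{
    /* hypothetical lower-level header wrapped around upper-level data;
       the payload buffer is never copied into a combined buffer */
    char header[]  = "HDR len=13 ";
    char payload[] = "hello, world\n";

    struct iovec iov[2];
    iov[0].iov_base = header;
    iov[0].iov_len  = strlen(header);
    iov[1].iov_base = payload;
    iov[1].iov_len  = strlen(payload);

    /* the kernel walks the address+length list and sends one
       continuous stream */
    if (writev(STDOUT_FILENO, iov, 2) < 0)
        perror("writev");
    return 0;
}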

a side issue somewhat raised by the scatter/gather scenario has been
header versus trailer error checking codes. if you use header error
checking codes, the complete data stream has to be processed before
you can fill in the error checking field in the header. with trailer
error checking codes, the i/o hardware can keep a running calculation
of the error checking code as the data streams thru and then fill in
the error checking field as the trailer passes by.
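
a sketch of the trailer-style running calculation (toy additive
checksum, not any real protocol's check code):

#include <stdint.h>
#include <stdio.h>
#include <string.h>

/* accumulate the check code as data streams past */
static uint32_t running_sum(uint32_t acc, const void *buf, size_t len)
{
    const unsigned char *p = buf;
    while (len--)
        acc += *p++;
    return acc;
}

int main(void)
{
    const char chunk1[] = "some data streaming ";
    const char chunk2[] = "through the adapter";
    uint32_t acc = 0;

    acc = running_sum(acc, chunk1, strlen(chunk1));  /* as chunks go by */
    acc = running_sum(acc, chunk2, strlen(chunk2));

    /* with a trailer check field, this value is simply appended at the
       end; with a header check field, both chunks would have had to be
       processed before the header could even be sent */
    printf("trailer check field: %u\n", acc);
    return 0;
}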

the header/trailer error checking issue isn't directly tied to
scatter/gather issues ... but if you aren't worried about the
overhead of lots of buffer copies, then you likely aren't worried
about the preprocessing error checking overhead either. if
you are looking at scatter/gather to avoid buffer copy overhead
... then you may also be concerned about the overhead of calculating
the error checking field.

R.S. wrote:
It was called 'OEMI' I forgot what I stands for, presumably Interface.
This book descibes Bus&Tag infterface, including plug construction,
signal characteristics, voltage levels etc. See GA22-6974

other equipment manufacture interface ... or something similar ...
synonym for pcm ... plug compatible manufacturer. somewhere along the
way, there had been some litigation by pcm vendors.

cp67 had been installed at the university, and i got involved in
rewriting large portions of the kernel ... creating a dynamic adaptive,
feedback scheduler (later customers would refer to it as the fairshare
scheduler ... since one of the resource management policies was
fairshare), new paging algorithms, record chaining, ordered seek
queueing, fast path, etc. i gave a presentation on some of the work at
the fall '68 share meeting in Atlantic City (the presentation also
included extensive work done on os/360 mft14) ... part of that
presentation:
http://www.garlic.com/~lynn/94.html#18 CP/67 & OS MFT14

cp67 shipped with 2741 and 1052 terminal support, but the university
had these m33 teletype (ascii) terminals ... so i added tty/ascii
terminal support. the cp67 2741/1052 support had dynamic terminal
identification ... i.e. it would try sending various commands and
playing with the 2702 SAD command to dynamically identify the
terminal. i figured what the heck, i could add tty/ascii support
similarly ... so that cp67 could do dynamic terminal identification
for 2741/1052/tty.

i had this grandiose idea that you could have a single number on a
phone rotary and let all kinds of terminals dial the same number ...
and the system would dynamically determine the terminal type. well, it
turned out that the dynamic terminal identification worked fine for
2741/1052 ... but there was a problem adding tty. while the 2702
supported being able to associate any line-scanner with any port/line
... it had a restriction that the oscillator setting the baud rate for
each line/port had to be hard-wired.

this sort of kicked off a project at the univ. to reverse engineer the
360 channel interface, build our own channel interface board, and
program an interdata/3 to emulate a 2702 controller (somebody even
wrote an article blaming four of us for originating the pcm controller
market). one of the things that was implemented in the interdata/3 was
dynamic baud rate in the software line-scanner ... i.e. the software
would strobe the signal rise/fall on the line to dynamically figure
out the terminal baud rate. misc. past posts about 360 oem/pcm
http://www.garlic.com/~lynn/subtopic.html#360pcm
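
a toy illustration of the strobing idea (not the interdata/3
line-scanner code; the sample data and rates below are made up): time
the shortest pulse on the line and take the sample rate divided by
that width as the bit rate:

#include <stdio.h>

int main(void)
{
    /* pretend line samples (1 = mark, 0 = space) taken at sample_hz;
       the shortest run of identical samples approximates one bit time,
       so sample_hz / shortest_run approximates the baud rate */
    const double sample_hz = 9600.0;
    int samples[] = { 1,1,1,1,1,1,1,1, 0,0,0,0, 1,1,1,1,
                      0,0,0,0,0,0,0,0, 1,1,1,1, 0,0,0,0 };
    int n = (int)(sizeof samples / sizeof samples[0]);

    int shortest = n, run = 1;
    for (int i = 1; i < n; i++) {
        if (samples[i] == samples[i - 1]) {
            run++;
        } else {
            if (run < shortest)
                shortest = run;
            run = 1;
        }
    }
    if (run < shortest)
        shortest = run;

    printf("shortest pulse: %d samples -> ~%.0f baud\n",
           shortest, sample_hz / shortest);
    return 0;
}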

specific article mentioning future system project by somebody that
worked on it (The rise and fall of IBM, Jean-Jacques DUBY Scientific
Director of UAP, Former Science & Technology Director of IBM Europe)
http://www.ecole.org/Crisis_and_change_1995_1.htm

quote from above:
IBM tried to react by launching a major project called the 'Future
System' (FS) in the early 1970's. The idea was to get so far ahead
that the competition would never be able to keep up, and to have such
a high level of integration that it would be impossible for
competitors to follow a compatible niche strategy. However, the
project failed because the objectives were too ambitious for the
available technology. Many of the ideas that were developed were
nevertheless adapted for later generations. Once IBM had acknowledged
this failure, it launched its 'box strategy', which called for
competitiveness with all the different types of compatible
sub-systems. But this proved to be difficult because of IBM's cost
structure and its R&D spending, and the strategy only resulted in a
partial narrowing of the price gap between IBM and its rivals.

... snip ...

by that time, i had gone to work at the science center ... and i would
make some sarcastic comments about the similarity between FS project and
a long-playing "cult" film playing down in central sq. (something about
the inmates being in charge of the institution).

basically FS was going to completely replace 360 ... and was as
radically different from 360 as 360 had been from prior generations.
the focus on replacing 360 with FS somewhat dried up projects for 360
follow-ons ... so when FS was eventually killed there was a mad
scramble to re-invigorate work on 370 products. by that time, you were
starting to see plug-compatible processors in addition to
plug-compatible controllers.

i remember a seminar that amdahl gave at mit to a large audience in the
early 70s. one of the questions was how did he justify getting
investment for his company. he described an analysis of the hundreds of
billions that customers had already invested in developing 360 software
and stated that even if ibm was to totally walk away from 360
architecture, there would still be enough customer 360 software until at
least the end of the century to keep him in business (at the time, the
end of the century was still almost 30 years away).

basically tape, vmfplc, and vmfplc2 put data on tape similarly: read
the FST, write the FST to tape, open the file, modify the file
characteristics to match the physical disk format ... file record
reads and physical data blocks are then the same.

processing the tape: read the fst and save it, read physical blocks
from tape and write them to the disk file (the file records then may,
or may not, be the same as the physical disk records); when finished
reading each cms file on tape, close the file being written to disk
and zap the FST for the new disk file with the FST information read
from tape.

vmfplc2 was updated to add misc. new function and also handle FSTs for
both the original filesystem (800-byte physical records) and the EDF
filesystem (1k, 2k, 4k physical records) ... but there were some FST
format changes (and edf had a filesystem call that avoided having to
do some of the FST fiddling).

i had done heavy modification of VMFPLC2 (for all sorts of reasons)
and called it VMXPLC and used it to implement a backup/archive
system. this was deployed at research, hone (vm/370 world-wide, sales,
marketing, field-support operation)
http://www.garlic.com/~lynn/subtopic.html#hone

and a couple other internal places.

this went thru a couple internal releases and then a project was
started to make it available to customers. with numerous enhancements
this eventually was released as workstation datasave ... i.e. it added
agents on distributed machines to do backup/restore data transfer.
workstation datasave then morphed into ADSM and is now called TSM
misc. past posts, workstation datasave, adsm, tsm, etc
http://www.garlic.com/~lynn/submain.html#backup

Anne & Lynn Wheeler writes:
i had done heavy modification of VMFPLC2 (for all sorts of reasons)
and called it VMXPLC and used it to implement a backup/archive
system. this was deployed at research, hone (vm/370 world-wide, sales,

much larger blocking was done ... and also combining the FST record
with data records as a single physical record (for small files you
could have the majority of the tape as inter-record gaps; combining
the FST record with the data block record would cut the inter-record
gaps in half).

i had done a page-mapped filesystem for cms way back in cp67 ... along
with being able to define shared page stuff from the cms executable.
i would map the original cms filesystem semantics on top of page-mapped
operations. later, i also implemented the CMS edf filesystem semantics
on top of page-mapped operations. misc. cms page-mapped filesystem
postings
http://www.garlic.com/~lynn/submain.html#mmap

if you had a cms filesystem in a page-mapped area ... and were dealing
with old applications (that didn't deal in page-sized records aligned
on page boundaries) ... the implementation would push & pull the data
around so that it was transparent to the application that it was
dealing with a page-mapped filesystem. however, there were all sorts
of thruput and performance benefits if the filesystem i/o happened to
be in page-size records aligned on page boundaries. so part of the
changes for VMXPLC included doing some of that.

note that the basic filesystem changes were never included as part of
any mainframe shipped product (some of it was incorporated into some
version of xt370).

in the morph from cp67 to vm370, the standard system mechanism for
dealing with shared pages (shared segments) was to define them in a
special kernel paging area. the definitions were specified in a kernel
assembler module, DMKSYS. the process was to initialize an image in
virtual address space and issue a (privileged) "SAVESYS" command to
write an image from the virtual address space to the saved system
area. Access to these "named" systems was by specifying the DMKSYS
name in the CP IPL command.

the port of all the memory mapped stuff from cp67 to vm370 included a
bunch of stuff about handling all sorts of virtual address space stuff
... shared segments, non-shared segments ... dynamically loaded, etc.
While all the page mapped filesystem stuff wasn't picked up and
shipped ... there was a small subset of the vm370 kernel changes
picked up and released as DCSS (discontiguous shared segments). This
allowed loading of sections of virtual memory from a DMKSYS named
system w/o having to go thru the IPL command semantics.

not so much because they needed the filesystem performance
enhancements ... but they needed the capability of having shared
segments w/o using the IPL command (that reset the whole virtual
machine).

most of the HONE applications (like the configurators; as things
evolved, eventually all mainframe orders had to be run thru a hone
configurator before they were submitted) were implemented in APL ...
and the systems were heavily processor limited. the APL interpreter
was a large amount of code ... and prior to DCSS, the approach was to
initialize a virtual address space with cms along with the APL
interpreter and save it as something like a CMSAPL named system.
branch office logons would then have an automatic IPL of CMSAPL,
throwing them directly into the HONE APL environment.

so all of that is standard system stuff ... except that they were
starting to find possibly 100:1 performance improvement by recoding
some of the APL stuff into compiled fortran. this required dropping
out of APL, running a cms fortran program and then reentering the
CMSAPL environment. this was a practical impossibility if the ipl
command had to be issued by the branch sales people for the transition
into and out of APL.

so the page-mapped filesystem support was installed at hone, and
rather than having an IPL named system CMSAPL defined in DMKSYS, the
APL interpreter was a standard cms executable in the cms page-mapped
filesystem (with the appropriate indicators for shared page operations
... which were added to the GENMOD command, recorded in a new module
field and supported by LOADMOD). then dropping into and out of APL to
run fortran applications became simple CMS application execution.

this was all before a subset of the shared page kernel changes were
picked up for DCSS and remapped to DMKSYS named systems.

Gerard 46 wrote:
If there were holes in zVM, they'd be closed. After all, ZVM is
designed to not let anyone out of their sandbox, even if you have
access to the source (as it was in the good ole days), and even back
then looking at the source, it was a hard thing to
do.
____________________________Gerard S.

a lot of attacks on systems in the past have frequently been some sort
of escalation of privileges. something has enuf privileges to place a
file somewhere in the system that some other entity with more privileges
will execute.

there was a security effort to make many of the documents only
available electronically online via special cms systems (considered
more secure than having lots of paper flowing around). some of the
people working on the effort once made the rash statement that even if
I was in the machine room, "even" i wouldn't be able to access the
documents. one of the few times i rose to the bait, i countered that
it might take five minutes. turns out most of the time was spent
disabling the machine from access outside the machine room, because i
was about to flip a bit in kernel memory. the bit i flipped was in the
branch instruction that followed the return from the authentication
checking routine (everything was about to be taken as valid
authentication).

I wrote a review of the new book by Vanguard founder Jack Bogle for
clients that may be of interest to readers of IP.

"The Battle for the Soul of Capitalism" argues most of the forces that
produced the scandals among Enron, Worldcom, et al remain in place.
This means investors should expect another wave of scandals even as
the bad actors of the first wave go to trial.

Bogle believes the cycle will continue until investors get much more
active in holding CEO's and the financial intermediaries accountable.

in principle, auditors are looking for inconsistencies between
different information sources (which might be an indication of fraud
or other improprieties). auditing internal corporate books somewhat
assumes that there are at least some sources that are independent and
therefore might have inconsistencies. modern IT technology can be
leveraged to generate a complete, consistent set of corporate books.
being able to leverage IT technology to generate complete consistency
isn't going to be overcome by asking for more information from the
same source (more doesn't necessarily imply independent).

looking for inconsistencies between independent sources then would
imply that information from non-corporate sources would have to be
checked against the corporate books (as opposed to assuming that
inconsistencies might exist in purely internal corporate books, no
matter how large the volume of information).

that also somewhat has led to leveraging other sources of information
about possible fraud or improprieties ... like inside informers.

Zeroing core

Robert Billing writes:
However I believe that there was a weird instruction which walked
backwards through memory leaving zeros behind and finally consumed
itself at address zero.

something similar ... was the MVCL instruction introduced on 370. the
source and destination lengths could be different, and if the source
length was less than the destination length ... there was a pad-fill
value. so you could set the source length to zero, the destination
length to all of memory, and the pad-fill value to zero.

there was an early microcode bug on the 370/125 for the MVCL
instruction. all 360 instructions would pretest starting and ending
addresses for validity and abort with the appropriate failure if
either was a problem. MVCL was a new type of instruction introduced on
370 which executed incrementally and was restartable ... things like
memory access checking were done incrementally as the instruction got
to that address. the 370/125 bug was that it did the pretesting rules
from 360 ... instead of the 370 incremental testing rules. vm370 used
MVCL at boot to both clear and test for end of memory ... setting the
destination length to max. the 125 would abort w/o executing any of
the instruction (doing a pretest of the ending address instead of
testing each location as processed) ... so the result appeared as if
the machine didn't have any memory.

there was also a story about the TR and TRT instructions. somewhere in
the history ... somebody decided that they had a bug. TR/TRT reference
a 256-byte table ... the value of each character in the source is used
to index the 256-byte table. the 360 implementation did a pretest on
the starting address of the table and assumed the ending address was
+256 (for the pretest before starting the instruction). however, it is
possible to use the instructions with "short" tables for situations
where the source input is known to contain constrained values. one
scenario is where the starting address plus 256 crosses a page
boundary and might result in a page fault ... but the actual execution
with a short table wouldn't. so the TR/TRT instruction implementation
was changed so that it tests the +256 ending address and, if there
possibly is an access problem ... the instruction is then
pre-executed, testing each of the possible source character values to
see if it results in a table address with some sort of access issue
... like a page fault. instead of automatically taking an exception on
an access issue with the +256 ending address ... it prechecks each
possible source character (to see if it will result in a calculated
table position on the other side of the access boundary).

i have some vague recollection of a different MVCL microcode bug on
some machine when there was a full 16mbytes of addressable memory (the
125 m'code bug i got involved in diagnosing, but this one i've just
heard stories about). the microcode had been dependent on getting an
invalid memory address to stop the operation ... however with a full
16mbytes of addressable memory, the address wrapped from 16mbytes back
to zero and kept going. the intention with the 16mbyte length was to
stop at the end of memory as opposed to wrapping around and actually
wiping all of memory.

Steve Gentry writes:
I think the shared/saved/NSS/DCSS segments is a dead issue.
We have also discussed the idea of using Virtual Disk and mapping a
minidisk to expanded storage. XC and all that entails.
We still have to keep in mind how each of these methods would be
implemented and how much the change would impact the program(s).
We're trying to make minimal changes to existing programs. i.e., The logic
for table processing is already in the programs. Where the table resides,
either above the line or below it are inconsequential. Changing the
program logic to use mdisks, be it v-disk or dataspaces requires a lot
more program change.

the original superset of DCSS i had started on CP/67 and referred to
as virtual memory management (VMM) ... this included handling CMS
disk i/o as page-mapped operations (as opposed to emulated real i/o
with CCWs, locking & unlocking pages, etc) and various enhancements
for sharing pages. recent posting
http://www.garlic.com/~lynn/2006.html#10 How to restore VMFPLC dumped files on z/VM V5.1

part of the idea for my vmm work came from having been exposed to some
tss/360 at the university ... and part from some of the multics stuff
going on on the 5th floor ... the science center and cp67 stuff was on
the 4th floor
http://www.garlic.com/~lynn/subtopic.html#545tech

the original r/o cms shared pages done on cp/67 were at the page level
(not segment) and store protect was handled by fiddling the storage
keys. cms didn't use storage keys ... so a virtual machine with shared
pages had the psw and storage keys fiddled; shared pages always had
storage keys set to zero; if the virtual machine attempted to set a
storage key for a shared page it was ignored, and if the virtual
machine attempted to set a zero storage key for a non-shared page it
was redone to x'F'. if the virtual machine attempted to load a psw
with a zero protect key, it was reset to x'F'.
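
the fiddling rules above, written out as a little decision function (a
sketch of the rules as just stated, not the cp/67 code):

#include <stdio.h>

#define KEY_SHARED   0x0   /* shared pages always carry key zero */
#define KEY_UNSHARED 0xF   /* non-shared pages forced to x'F' */

/* what the kernel substitutes when the virtual machine tries to set a
   storage key: attempts on shared pages are ignored (they stay zero),
   and a zero key on a non-shared page is redone to x'F' */
static int effective_key(int page_is_shared, int requested_key)
{
    if (page_is_shared)
        return KEY_SHARED;
    if (requested_key == 0)
        return KEY_UNSHARED;
    return requested_key;
}

int main(void)
{
    printf("shared page, vm sets key 3 -> key %X\n", effective_key(1, 3));
    printf("non-shared, vm sets key 0  -> key %X\n", effective_key(0, 0));
    printf("non-shared, vm sets key 5  -> key %X\n", effective_key(0, 5));
    return 0;
}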

for 370 virtual memory, there was all sorts of new hardware stuff
defined, including r/o segment protect and selective invalidate
instructions. for r/o segment protect, the segment table entry (in
the virtual address space table) for a specific segment could have the
r/o bit turned on. the segment table entry then pointed to a set of
pages in a page table. the kernel could setup address space tables so
that some of the page tables were the same across multiple address
spaces (aka shared). however, whether or not a particular virtual
address space had read/write or read/only access to a shared page
table was set in the address space (segment) table (aka you could have
a mix of address spaces with r/o and r/w sharing of the same page
table).

for the cms morph to 370 ... the shared pages were reorganized into a
segment and the plan was to use the new 370 r/o segment protect
facility. however, before 370 virtual memory was announced, an issue
arose with the 370/165 hardware support for virtual memory. in an
escalation meeting in pok, the 165 engineers said that it would take
an extra six months to implement segment protect and selective
invalidates. the vs2 system people said that they had no need for
segment protect, that their system would never do more than five page
i/os per second, and that they could do a batch page steal once a
second for that many pages (so the difference between doing a global
PTLB once a second or doing five individual IPTEs once a second was
negligible). It was only vm370 that was arguing for segment protect
and high paging rates. the resolution was to drop the additional
features from the 370 announcement so that virtual memory could ship
six months earlier.

this left cms in a bind ... the mechanism they were planning on using
for protecting shared segments disappeared from the machines. they
were forced to punt and return to the cp/67 mechanism of protecting
individual shared pages.

not too long later, the virtual machine assist (VMA) microcode assist
was introduced ... which included support for loading new PSWs in the
hardware (rather than taking a privilege interrupt into the cp kernel
and emulating the instruction). this and other VMA features
represented a thruput enhancement (by doing some of the virtual
machine stuff directly in hardware rather than having to interrupt
into the cp kernel for everything). for cms, the problem was that the
VMA microcode assist didn't know about the fiddling rules for protect
keys, and so VMA couldn't be turned on for CMS virtual machines
running with shared pages.

so coming up to vm/370 release 3, there was some work done to allow
CMS virtual machines to run with VMA. basically, since there was no
way of actually protecting the shared pages ... the paradigm changed
to allow shared pages to be modified ... but catching such
modifications before anything but the virtual machine making the
change saw them. basically on every task switch ... the dispatcher
would check to see if it was going to stop running a virtual machine
with shared pages. it would then scan that virtual machine's virtual
pages for any shared pages that had been modified. if the dispatcher
found any, it would unshare the shared system (for the virtual machine
that made the modifications) and update the shared tables to indicate
that the recently modified page(s) weren't in real memory and would
have to be paged in from disk.

at this time, normal cms had a single shared segment with 16 shared
pages. on avg., running a cms workload with VMA saved X percent cpu
... and checking 16 shared pages on every task switch cost Y percent
cpu ... where Y was normally less than X ... yielding an overall
thruput improvement of X-Y.

however, for actual vm/370 release 3, it was decided to also pick up a
subset of my VMM changes and release them as DCSS. now, part of the
changes picked up was that I had modified some amount of additional
CMS code to make it run in a shared segment (rewritten part of the
standard cms editor to make the code shareable, as well as some amount
of other code). so what shipped in release 3 was the DCSS changes
where CMS now normally ran with 32 shared pages and with VMA turned
on. However, doubling the number of shared pages then doubled the
avg. checking overhead to 2*Y ... and while nominally Y<X, 2*Y was
nominally greater than X.
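
a made-up numeric example of the trade-off (the X and Y values below
are purely illustrative, not measurements):

#include <stdio.h>

int main(void)
{
    double X = 5.0;        /* pct cpu saved by running with VMA */
    double Y = 3.0;        /* pct cpu to check 16 shared pages per switch */

    printf("16 shared pages: net %+.1f pct cpu\n", X - Y);      /* win  */
    printf("32 shared pages: net %+.1f pct cpu\n", X - 2 * Y);  /* loss */
    return 0;
}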

this was even further aggravated with shipping of multiprocessor
support. The checking gimmick was predicated on treating the shared
pages as private while a specific virtual machine was actually running
... and then catching and fixing any changes before switching to a
different virtual machine. with 2-way smp support, you could now have
two virtual machines running concurrently ... simultaneously accessing
the same shared pages (violating the principles of the gimmick). so
to perpetuate the fixup gimmick, real processor specific shared
segments/pages were defined. now, in addition to checking whether the
previous virtual machine had modified any shared pages ... before you
went to run a new virtual machine, you had to check whether the new
virtual machine had its virtual memory table entries pointing to the
shared segments specific for the processor it was about to run on.
Now, not only was 2*Y>X (i.e. overhead greater than savings from using
VMA), but the processor specific page table pointer fiddling really
blew it out of the water.

HONE was using it along with extensive use of the APL interpreter
running with something like 64 shared pages. the release 3 gimmick for
using VMA would mean that HONE was having to check nearly 100 shared
pages on every task switch. furthermore, since their workload was
heavily apl-interpreter execution bound ... the VMA assist provided
them negligible thruput improvement.

Would multi-core replace SMPs?

hzmonte writes:
What I meant by "SMP" is the "traditional" SMP, the "pre-multicore"
SMPs. That is, a computer with more than one sockets on its mothrboard
and each socket houses a processor and each processor has one core. In
other words, how does one compare a computer with 2 single-core
processors with one that with one dual-core processor, for instance?
Note that I am not comparing with two computers each with a single-core
processor. How about a computer with a quad-core processor and a
computer with 2 dual-core processors?

how 'bout the "traditional" SMP, the pre-LSI, pre-VLSI SMPs ... that
is a computer that filled a whole room

how do you compare a computer made up of chip technology that had 4
circuits per chip? say a two-processor 370/168smp.

how do you compare a processor chip with off-chip cache that is
shared with another processor chip on the same board?
possibly in a configuration with 8 such boards (16 processors)
sharing memory?

how do you compare a chip with two independent processors sharing
onchip cache as well as offchip cache.

logically ... the operational design may be identical ...
differences are possibly latency and level of circuit integration.

how 'bout if you don't bother to slice & dice the wafer ... just leave
all the chips in single piece of silicon wafer ... lets go for
thousands of "cores" on single piece of silicon?

how 'bout hercules emulating a 370/168smp but running on a current
dual-core processor.

S/360

"robin" writes:
FYI, our s/360 was slower than the machine that it replaced for
small jobs -- yet the machine it replaced was 50 times slower (add
time 64uS). When LCS was put on the S/360, it ran even slower,
because the OS took up most of the fast memory, so that user
programs were loaded into slow memory.

our 360/67 at the univ. had significantly worse thruput than the 709
(tube machine) that it replaced. the 360/67 had 8-byte wide 750ns
memory (and instruction) cycle time (compared to 360/50 2-byte 2mic
memory, and most LCS was 8mic). 360/67 was essentially a 360/65 that
had virtual memory hardware bolted on.

a dominant workload was fortran student jobs ... there was a 1401
front-end that handled unitrecord<->tape ... and you carried tapes
between the 1401 and 709 (this was 40 yrs ago, 1966). the 709 fortran
monitor ran student jobs (tape to tape) at a couple of seconds each.

the 360/67 came in with os/360 and 2311 disks. job processing was
synchronous with unit record processing ... read the cards ... write
stuff to disk, read stuff from disk, eventually execute ... print
output ... do the next job. this was taking minutes per student job.
most of the time the 360/67 ran as a non-virtual-memory 360/65 with
vanilla os/360. the 67 associative array hardware lookup (virtual to
real address translation) did add 150ns to the 750ns basic memory
cycle (900ns total).

by the os/mft11 release, the univ. got HASP, which decoupled the unit
record processing from the job execution. the other operating system
gorp of moving lots of stuff back & forth between memory and disk was
still resulting in student fortran job processing taking over 30
seconds.

i did a lot of detailed analysis and careful construction/placement of
operating system stuff on disk for optimized arm seek operation and
got the typical student fortran job elapsed time to a little under 13
seconds (still longer than they had run on the 709) ... but almost
three times faster than it had been taking.

it wasn't until we got the watfor monitor from the univ. of waterloo
that student fortran job thruput started to exceed what it had been on
the 709.

for a little drift ... i had done some of the work on gcard ...
an ios3270 version of the 360 green card ... and just recently
did something of a rough conversion to html
http://www.garlic.com/~lynn/gcard.html

"Del Cecchi" writes:
Please look up SMP. It is a architectural term, not a term related to
construction as you used it. Lynn tried to point that out to you in his
own way, as did I. If you can't understand this then perhaps you need to
do some studying. And for a newbie to tell someone who has been
participating a while what to do is presumptuous. So please don't
display your ignorance. Thank you for playing.

i was always told respect was something that had to be earned.

it used to be that the rash of newbie questions coincided with a new
semester ... kids getting access to newsgroups from some campus
terminal for the first time ... and asking questions which frequently
turned out to be some homework assignment. there may be all sorts of
newsgroups where novice and homework type questions are considered
appropriate ... however, it isn't normally considered as such here
(and they were treated very unkindly ... trying to nip the disease in
the bud so to speak; making an example tended to be a good thing).

misc. collected past postings on tightly-coupled, multiprocessor, smp,
compare&swap, etc (including multiple references that compare&swap
was chosen because we had to come up with something that stood
for charlie's initials CAS):
http://www.garlic.com/~lynn/subtopic.html#smp

and various references to my pet VAMPS smp project (which was
canceled before announcement; we were told because we could only
forecast $8-9b revenue over five years and the minimum requirement was
$10b revenue over five years)

tightly-coupled multiprocessing ... where SMP was either symmetrical
multiprocessing or shared memory processing. in the past there were
some genres of asymmetric multiprocessors ... collections of shared
memory processors but not all with i/o capability (thus the reference
to asymmetric; they were also sometimes called attached processors).
then there is the numa variety of shared memory processing. a
recent posting mentioning numa:

closely-coupled multiprocessing .... didn't tend to have cache
consistency, but data sharing and serialization could be done with
memory access type semantics (all in software). say a collection of
3090-type processors that didn't share normal instruction/data memory
but might have a shared extended-store type box. some similarities to
numa ... but different.

loosely-coupled multiprocessing ... frequently referred to as clusters
these days ... data sharing and serialization was achieved with
io/message-passing semantics. also big in GRID.

Rob van der Heij writes:
From a pure technical point of view, swapping to DCSS is much more
elegant because you copy a page under SIE and don't step out to CP to
interpret a channel program. But the drawback is that the DCSS is
relatively small and requires additional management structures in the
Linux virtual machine memory. I see some goodness for very small
virtual machines, I think.

in one sense it is like extended memory on 3090 ... fast memory move
operations. however, real extended memory was real storage. dcss is
just another part of virtual memory. in theory you could achieve
similar operational characteristics just by setting up linux to have
larger virtual memory by the amount that would have gone to dcss
... and having linux rope it off and treat that range of memory the
same way it might treat a range of dcss memory. minor drift, a recent
post mentioned 3090 expanded memory
http://www.garlic.com/~lynn/2006.html#16 Would multi-core replace SMPs?

the original point of dcss was having some virtual memory semantics
that allowed definition of some stuff that appeared in multiple
virtual address spaces ... recent post discussing some of the dcss
history
http://www.garlic.com/~lynn/2006.html#10 How to restore VMFPLC dumped files on z/VM V5.1

if the virtual space range only occupies a single virtual address
space ... for most practical purposes, what is the difference between
that and just having an equivalent virtual space range as non-DCSS
(but treated by linux in the same way that you would treat a DCSS
space)?

note that in the original virtual memory management implementation
... only a small subset was picked up for the original DCSS
implementation. a virtual machine could arbitrarily change its
allocated segments (contiguous or non-contiguous) ... so long as it
didn't exceed its aggregate resource limit. however, the original
implementation also included support for an extremely simplified api
and very high performance page-mapped disk access (on which the
page-mapped filesystem was layered)
http://www.garlic.com/~lynn/submain.html#mmap

... and sharing across multiple virtual address spaces could be done
as part of the page mapped semantics (aka create a module on a page
mapped disk ... and then the cms loading of that module included
directives about shared segment semantics).
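
a rough modern analog of the page-mapped api idea is unix mmap() (this
is not the cms api, just the flavor of mapping file contents into the
address space rather than doing explicit record i/o):

#include <fcntl.h>
#include <stdio.h>
#include <sys/mman.h>
#include <sys/stat.h>
#include <unistd.h>

int main(int argc, char **argv)
{
    if (argc < 2) {
        fprintf(stderr, "usage: %s file\n", argv[0]);
        return 1;
    }
    int fd = open(argv[1], O_RDONLY);
    if (fd < 0) { perror("open"); return 1; }

    struct stat st;
    if (fstat(fd, &st) < 0) { perror("fstat"); return 1; }

    /* the file's pages become part of the address space; touching them
       drives paging i/o instead of explicit read()s / emulated channel
       programs */
    char *p = mmap(NULL, st.st_size, PROT_READ, MAP_SHARED, fd, 0);
    if (p == MAP_FAILED) { perror("mmap"); return 1; }

    fwrite(p, 1, st.st_size, stdout);
    munmap(p, st.st_size);
    close(fd);
    return 0;
}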

note that one of the issues in unix-based infrastructure ... is that
the unix-flavored kernels may already be using 1/3 to 1/2 of their
(supposedly) real storage for various kinds of caching (which
basically gets you very quickly into 3-level paging logic ... the
stuff linux is using currently, the stuff it has decided to save in
its own cache, and the total stuff that vm is deciding to keep in real
storage). for linux operation in constrained virtual machine memory
sizes, you might get as much or better improvement by tuning its own
internal cache operation.

one of the things i pointed out long ago and far away about running a
lru-algorithm under a lru-algorithm ... is that things can get into
pathological situations (back in the original days of mft/mvt adding
virtual memory for the original vs1 & vs2). the cp kernel has selected
a page for replacement based on its not having been used recently ...
however, the virtual machine page manager also discovers that it needs
to replace a page and picks the very same page as the next one to use
(because both algorithms are using the same "use" criteria). the issue
is that both implementations are using the least used characteristic
as the basis for the replacement decision. the first level system is
removing the virtual machine page because it believes it is not going
to be used in the near future. however, the virtual machine is
choosing the least recently used page to be the next page that is used
(as opposed to being the next page not to be used).

running a LRU page replacement algorithm under a LRU page replacement
algorithm is not just an issue of processing overhead ... there is
also the characteristic that a LRU algorithm doesn't recurse
gracefully (i.e. a virtual LRU algorithm starts to take on the
characteristics of an MRU algorithm to the 1st level algorithm ...
i.e. the least recently used page is the next most likely to be used
instead of the least likely to be used). misc. past stuff about page
replacement work ... originally done as an undergraduate for cp67 in
the 60s
http://www.garlic.com/~lynn/subtopic.html#wsclock
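
a tiny thought-experiment in c for why the recursion goes wrong
(made-up timestamps): both levels rank pages by last use, so the
host's eviction victim and the guest's next replacement target are the
same page:

#include <stdio.h>

#define FRAMES 4

int main(void)
{
    /* last-use timestamps for the guest's resident pages
       (higher = more recently used); values are made up */
    int last_use[FRAMES] = { 7, 2, 9, 5 };

    /* host LRU: evict the guest page that was used least recently */
    int host_victim = 0;
    for (int i = 1; i < FRAMES; i++)
        if (last_use[i] < last_use[host_victim])
            host_victim = i;

    /* guest LRU: pick the least recently used frame as its own next
       replacement target -- i.e. the very next frame it will touch */
    int guest_victim = 0;
    for (int i = 1; i < FRAMES; i++)
        if (last_use[i] < last_use[guest_victim])
            guest_victim = i;

    /* same frame: the page the host just decided wouldn't be needed is
       the first one the guest touches, forcing an immediate page-in */
    printf("host evicts frame %d, guest reuses frame %d next\n",
           host_victim, guest_victim);
    return 0;
}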

with respect to this particular scenario of 2nd level disk access
... one of the characteristics of this long ago and far away page
mapped semantics for high performance disk access (originally done on
the cp/67 base) was that it could be used by any virtual machine for
all of their disk accesses (at least those involving page-size chunks)
whether it was filesystem or swapping area.

in the original virtual memory management page-mapped implementation
(that predated DCSS; vm/370 release 3 just picked up a small subset),
the source of the system was on a standard virtual machine accessed
disk that was accessed using page-mapped semantics. infrastructure
layered on top could be standard filesystem semantics ... and the
object loaded could be managed with standard filesystem operations
(the load command specified both the mapping between the page-mapped
disk and the virtual address space as well as any sharing options ...
with various kinds of integrity rules applied).

the issue for cms then easily becomes having multiple managed kernel
images ... you create a new kernel image at a new location and update
the boot information to point to the latest kernel image location.
people booting cms after the boot pointer change get the new image;
people that had already loaded the earlier kernel image keep the
previous kernel image.
http://www.garlic.com/~lynn/submain.html#mmap

in theory, linux kernel images and maint. could be handled in a
similar way ... just create a new boot image in a new location.

virtual machines then are modified to load a named system ... say
linuxredirect ... which is just a trivial piece of redirection code
which does the actual loading of named systems. the redirection code
is updated after some maint. process and the appropriate images have
been saved to an available place.

Rick Troth writes:
Sounds a lot like XIP, which is based on EXT2.
Define a partition of the same size as the intended DCSS
(or slightly less), put your stuff there, unmount it,
use 'dd' or something like it to grab a snap-shot, and store.

i have two machines, firefox is running on both. if i x-windows from
one machine to another via ssh tunnel, and try and start a 3rd
firefox, it aborts after saying something about already present.

if i kill firefox on the target machine and try and restart firefox,
it says the same thing. if i have firefox running on the target
machine and not on the x-windows machine ... and do a firefox
(profile) start ... i can run two different firefox instances simultaneously on
the same machine (with different profiles) as long as the windows are
on different x-window displays.

at startup, there seems to be some early checking for window display
with the same name(?) ... even before it checks for -profilemanager
option.

bob.shannon@ibm-main.lst (Bob Shannon) writes:
You didn't say where. IBM sold its Cottle Road facility in San Jose. I
can't imagine how much it sold for, but given the value of property in
Silicon Valley, and given the potential cost of cleanup after 50 years
of disk manufacturing, they probably made out pretty well. I believe
the employees moved to SVL.

... when 85 cut thru the middle of the property, the plant rec area
came up on the other side of 85 ... as a result they had to put in a
special underpass between the main facility and the rec area. shortly
afterwards they sold off the rec area for development and it turned
into apts/condos ... and they filled in the underpass. around the same
time they sold off the los gatos lab. (along with something like 200
acres of open land) ... which got torn down and turned into a housing
development.

earlier, there had been consolidation of a number of off-site leased
bldgs adjacent to the main plant site (bldgs 86, 96, 97, 98, etc,
which also came up on the other side of hiway 85 when it was built).

in the mid-80s there was a prediction that ibm world-wide business
was going to double ($60b/annum to $120b/annum) and there was a
massive manufacturing facility construction program. one of those was
the large (disk manufacturing) bldg. 50 on the main plant site. in the
later downsizing from the offsite bldgs, some were consolidated into
offices in bldg. 50 and others moved to santa teresa lab (bldg. 90,
some 10 miles to the south). slightly related
http://www.garlic.com/~lynn/2005j.html#32 IBM Plugs Big Iron to the College Crowd
http://www.garlic.com/~lynn/2005s.html#16 Is a Hurricane about to hit IBM ?

one of the groups that got moved into bldg. 50 was the adsm group
(much of the organization had previously been in bldg. 98). adsm had
morphed from the workstation datasave facility (and has now been
renamed tsm). the workstation datasave facility had grown out of a
backup/archive system i had written and deployed internally.
http://www.garlic.com/~lynn/submain.html#backup

in the photos from the above ... you can sort of see that the lab. was
built on a marsh that got extremely wet during the rainy season from
run-off (both surface and sub-surface) off the nearby hills. the
datacenter is below ground and when it was first built ... they had
significant water seepage problems into the machine room (separate
from the later flooding problems mentioned in the above). I found the
problem somewhat interesting since during college, I had a summer job
as foreman on a construction project that was located on similar
terrain ... large surface and sub-surface drains had been installed to
divert the water flow around the site.

Anne & Lynn Wheeler wrote:
in the mid-80s there was a prediction that ibm world-wide business
was going to double ($60b/annum to $120b/annum) and there was a
massive manufacturing construction program. one of those was the large
bldg. 50 on the main plant site. in the later downsizing from the
offsite bldgs, some were consolidated into offices in bldg. 50 and
others moved to santa teresa lab (bldg. 90, some 10 miles to the south).

in the plant site picture from the air .... hiway 85 crosses the
picture from top left center ... to the mid-left edge. the railroad
and old hiway 101 (monterey rd) run from the right center edge to the
bottom middle.

you can just make out cottle rd overpass on hiway 85, horizontal
towards the top of the picture. most of the "old" plant site is top
half of the picture along cottle. including bldg. 10, 12, 14, 15, 26,
etc. various stories about disk engineering work in bldg. 14 & 15
http://www.garlic.com/~lynn/subtopic.html#disk

if cottle road were to be extended straight up the hill ... it would
almost intersect the almaden facility (but cottle terminates at the
bottom of the hill). to get to almaden research (from cottle rd
facility) ... you take the bernal rd thru santa teresa county park
(the other access is via harry rd from the almaden valley side). in
the right middle and bottom left pictures, harry road is seen heading
down into almaden valley (towards the top of the pictures) and bernal
road heading off to the right

the large bldg. in the left center of the image is bldg. 50, built in
the mid-80s (as part of the huge manufacturing expansion) for disk
manufacturing. in the later downsizing, the offices in 50 were used to
absorb some number of the people from off-site leased bldgs.

news report today claimed that auto industry restructuring includes
remaking them more competitive and agile, trying to reduce elapsed
time from concept to showroom to 3yrs.

in the late 80s, there were statements about leveraging IT (and other
technology) for remaking themselves more competitive and agile,
reducing elapsed time from concept to showroom to 18-36 months
... comparable to that of some of the foreign imports (zoom forward
15+ yrs and the objective is now only 3yrs).

sometime around 1980, there was an article in east coast paper
(washington post?) proposing 100% unearned profit tax on the auto
industry. supposedly the foreign auto import quotas were to provide
the american auto industry breathing room to restructure and remake
themselves more competitive (and agile?) ... the article said instead,
they were taking the increased profits (that they were making in the
wake of the import quotas) and paying them out in dividends, bonuses,
and salaries (rather than investing in restructuring). of course,
agility is major boyd & OODA-loop theme
http://www.garlic.com/~lynn/subboyd.html#boyd
http://www.garlic.com/~lynn/subboyd.html#boyd2

Joe Morris writes:
Sigh...when did this happen? I've never been to Santa Teresa, but
I've got fond memories of its name, courtesy of the SHARE button
that John Ehrman created many years ago:

Santa Teresa
Ora pro nobis

and there is the story about renaming Santa Teresa Lab at the last moment.

i was on a trip to DC the week before the smithsonian air & space
museum (and santa teresa lab) were to open. the normal corporate
naming process was to use the closest post office. however, the week
before the opening, there was a demonstration on the steps of the
capitol by a group of working ladies from san francisco who belonged
to the coyote organization (which happened to also be the name of the
closest post office to the new facility). santa teresa is the name of
the closest cross-street.

way back when, when i needed to do some stuff at stl, i would
periodically ride my bike from cottle to stl ... on santa teresa
(closest north/south street).

i objected to there being a head wind going south in the morning and
also a head wind coming back north in the afternoon. especially during
the summer there is interesting wind pattern between the hills on both
sides of the valley. in the morning the hotter air rising over the bay
pulls air from the south valley. By the afternoon, sun heating in the
south valley reverses the wind pattern (with the hotter rising air in
the south valley pulling air from the bay). the hills also form a
narrowing of the valley about half way between cottle and bailey
... the constriction accentuating the wind.

in the summer, the morning wind also can bring the strong smell of the
garlic fields around gilroy ("garlic" capital of the world).

4k/EDF is the EDF CMS filesystem introduced in VM/370 release 6, used
with the 4k block option.

PAM/CDF is the original CMS filesystem modified to use 4k records
(rather than 800-byte records) and the page-map interface.

PAM/EDF is the EDF CMS filesystem modified to use the page-map interface.

CPU is the total (combination of virtual and supervisor time).

The test was run on a standard production system with other activity. A
run consisted of doing the exact same operations involving the pam/cdf
filesystem, then the pam/edf filesystem, then the 4k/edf filesystem. This
was repeated eight times and the avg. values and the minimum values were
used.

There is no direct comparison between 4k/edf i/o and pam i/o. For
4k/edf, I/O is reported as the number of virtual i/o operations
(independent of the number of records transferred). There is slight
variability in 4k/edf i/o based on the state of the filesystem and the
number of contiguous records that might be involved per operation.
For pam i/o, what is reported is the number of 4k page transfers (which
might also include other paging operations for the virtual machine during
the period and doesn't directly indicate the number of physical i/o
operations).

Both PAM tests performed better than 4k/edf because of 1) reduced CP
overhead for supporting the operation and 2) better CP logic for chaining
multiple 4k transfers in a single I/O.

The minimum values are going to be closest to base operation, excluding
interference from other workload on the system. pam/cdf is nearly
twice as fast and uses approx. 1/4 less cpu for the workload compared to
the same workload running on the 4k/edf filesystem.
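
a minimal python sketch of the reduction just described -- eight
repetitions per filesystem, keep the avg. and the minimum, and compare
on the minimums. the numbers below are placeholders (not the original
measurements), picked only to line up with the roughly 2x elapsed-time
difference mentioned above:

    def summarize(runs):
        # runs: elapsed seconds from repeated executions of the same workload
        return {"avg": sum(runs) / len(runs), "min": min(runs)}

    # placeholder numbers for eight repetitions of each filesystem variant
    pam_cdf = [21.0, 22.4, 20.8, 21.5, 23.1, 20.9, 21.2, 22.0]
    edf_4k  = [40.1, 43.7, 39.8, 41.2, 44.0, 40.5, 39.9, 42.3]

    p, e = summarize(pam_cdf), summarize(edf_4k)
    print("speedup on minimum values:", round(e["min"] / p["min"], 2))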

"Tim Shoppa" writes:
It's at 41st and Wisconsin in NW DC. It is still standing. Although the
concrete tower is dwarfed by the big metal towers in the area around
it, it looks like the concrete structure could withstand anything but a
direct nuclear blast. Some of the ancillary buildings around the tower
have been torn down/rebuilt in the past decade.

The neighborhood that the tower is in is dominated by TV and radio
stations. The industrial warehouses and workshops on Wisconsin have
generally been converted into retail use over the past couple of
decades (although there are still a few car dealerships in that
neighborhood.)

AT&T had a microwave tower built from concrete on a high point in my
old neighborhood. It was later replaced by a steel tower. Indeed,
given the changes in the telecom world, I wonder if it is still there.

It looks to me that the concrete WU tower may be too hard to tear down
:-).

isn't there a big unfinished half-built tower off of connecticut(?)
somewhere around friendship hills(?) ... behind a whole foods(?)
store.

sometime in the early 70s (?, late 60s?), san jose had put in a
(microwave) T3 (44mbit) collins digital radio. the roof of bldg. 12 (on
the main plant site) had dishes to a repeater tower on the hill above
santa teresa lab (and down to STL) and to a repeater tower above the los
gatos lab (and down to bldg. 29). there were a number of subchannels
split off the T3. this was standard c-band ... when (elevated section) 85
was built, some number of people noted radar detectors being tripped
when passing line-of-sight between bldg. 12 and the tower above STL
(the tower above lsg went when that site was sold off for housing
development).

when almaden was built, something like six fiber pairs were run from
main plant site up the hill (instead of doing microwave), i believe
the drivers were initially oc3.

when ibm formed sbs (with comsat and aetna), there were something like
a dozen or two 10m dishes put into major sites around the US that ran
T3; this was also c-band. there were some run-ins with the MIB when
the data aggregator (aggravator) was built and installed on the
network (doing bulk T3 encryption).

when a big part of the ims group was moved out of STL to off-site
(leased) bldgs (96 & 97), instead of doing traditional remote 3270 for
access back into the stl datacenter (really, really slow and tortuous),
we put in T1 channel extenders (using HYPERChannel) and ran "local"
3270s at the site (a subchannel over the T3 stl/bldg12 link and the
microwave tail-circuit from bldg12 to bldg98, with wires into the two
adjacent bldgs). detailed discussion:
http://www.garlic.com/~lynn/2005u.html#22 Channel Distances

we also did a number of terrestrial and satellite high-speed links.
in conjunction with sbs, hsdt designed a new tdma earth station and
put in a three-node satellite dish network with lots of high-speed
terrestrial circuits. it had 4.5m dishes at los gatos lab and ykt
research and a 7m dish in austin. this used a ku-band transponder on
sbs-4.

IBM microwave application--early data communications

"Tim Shoppa" writes:
That is the site exactly. Although from my usual perspective it's in
front of the whole foods. Several of the warehouses have been converted
into big-box stores in the past year-and-a-half.

random trivia drift ... the 7m hsdt tdma dish in austin was about a
mile up (north) burnet (several miles south of round rock and dell)
from the original whole foods at the intersection of burnet and 183

it was more like a medium sized health food store ... the big guys
like HEB had already started the super sized store craze.

i couldn't find softcopy for my presentation ... so last night, i
scanned the hardcopy. the pdf isn't bad, but the ocr processing was
only about 80-90 percent correct; i had to manually correct 10-20
percent of it.

I did find softcopy of the vm project agenda.

my talk was supposed to be on the history of vm performance ... dating
back to spring of '68 when i was an undergraduate (38 years now). I
had an hour and 60+ foils. i managed to only get thru the first couple
foils ... so we scheduled a bof during scids that night ... and i
talked from 6pm to midnight (room off scids, with interruptions for
refreshments).

Anne & Lynn Wheeler writes:
random trivia drift ... the 7m hsdt tdma dish in austin was about a
mile up (north) burnet (several miles south of round rock and dell)
from the original whole foods at the intersection of burnet and 183

and for some drift in another direction ... the austin/lsg hsdt link
was credited with helping bring in the original rios (rs/6000)
chipset a year early (by providing high-speed access to the lsm logic
simulator at the lsg vlsi lab).

the lsm was one of the first such boxes and included clock simulation
(most of the later boxes assumed a constant/uniform clock cycle).
including the clock allowed simulation of asynchronous clock chips as
well as digital chips with analog circuits.

Anne & Lynn Wheeler writes:
when ibm formed sbs (with comsat and aetna), there were something
like a dozen or two 10m dishes put into major sites around the US
that ran T3; this was also c-band. there were some run-ins with the
MIB when the data aggregator (aggravator) was built and installed on
the network (doing bulk T3 encryption).

... the idea was to design a replacement encryptor ... one that i could
build for under $100 and that could potentially change the key on every
packet (with the header/address in the clear, i.e. a level 2 rather
than a level 1/link encryptor).

The Racal-Milgo Datacryptor III uses an AMD DES chip plus some
proprietary logic for exchanging master keys using a public key
approach. The master keys are then used to encrypt the working keys
which are actually used for data encryption. I have run the
off-the-shelf units at 128kbit full duplex through the V.35 interfaces
contained in the unit. They cost about $3,200 each, single-unit price.

I have to emphasize they are link-level encryptors (i.e. level 1)
rather than level 2 (i.e. only the data encrypted, with addressing in
the clear). They won't do you much good for "session" or "file"
encryption.

The public key synopsis goes something like this -

1. At install time, two 60-digit prime numbers are acquired and
retained in storage. The 'seeds' are developed by a combination of an
operator pressing a button to sample a clock source along with
internal logic.

2. When an operator wants to move a master key online, he asks the
remote end for a one-time E key (56 bits plus 8 parity bits).

3. The remote end uses the two 60-digit prime numbers (with their
120-digit product) to create an E key and a D key. They are
"mathematically related to the prime numbers and to each other" to
quote the Racal documentation.

4. The E key can't be used for decryption and the D key can only
decrypt what has been encrypted with the E key.

5. The remote end retains the D key and sends the E key to the
'requestor'.

6. The local end generates a new master key, encrypts it with the E
key he just received, and sends it to the remote end.

7. The remote end decrypts the transfer with the D key he retained,
they exchange a test message for verification, and exchange a new
working key.

8. The E/D pair of keys are not reused. Each exchange of a master key
causes a new pair to be created. The E/D pair never leave the boxes,
and can't be read out or displayed. It takes about 8 seconds for the
hardware to do the decryption of the received master key using the D
key previously retained.
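
for illustration only, here's a textbook RSA-style python sketch of
steps 2-8 ... a guess at the general shape of the math, not the actual
racal implementation (the key sizes, names and encodings are made up,
and the toy primes are far smaller than the 60-digit primes in the
unit):

    import secrets

    def make_keypair(p, q):
        # remote end: derive a one-time E/D pair from the two retained primes
        n = p * q
        phi = (p - 1) * (q - 1)
        e = 65537                  # the "E key" exponent, sent to the requestor
        d = pow(e, -1, phi)        # the "D key", retained by the remote end
        return (e, n), (d, n)

    # step 1 (install time): two primes retained in storage -- toy mersenne primes here
    p, q = 2**61 - 1, 2**31 - 1

    # steps 2-5: the requestor asks for a one-time E key; the remote end keeps the D key
    (e, n), (d, _) = make_keypair(p, q)

    # step 6: the local end generates a new master key and sends it under the E key
    master_key = secrets.randbits(56)          # DES-sized master key
    ciphertext = pow(master_key, e, n)

    # step 7: the remote end recovers the master key with the retained D key
    recovered = pow(ciphertext, d, n)
    assert recovered == master_key

    # step 8: the E/D pair is discarded; a fresh pair is created for each exchange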

gilmap@ibm-main.lst wrote:
I wonder about Ted's statement that there remains a performance
edge for VIO. I understand the paths for paging are highly
optimized. But with modern buffered and virtual DASD the
difference ought to be shrinking. And there's the offsetting
overhead of emulating CKD protocol in virtual memory.

the original prototype for the move from MVT to VS2 ... involved
crafting cp/67's CCWTRANS into the side of MVT (excp processing). the
problem is simulating real i/o operations in a virtual memory environment.

basically excp/svc pointed to a channel program. CCWTRANS was called to
scan the passed CCWs, creating a shadow channel program that was
actually going to be executed. each CCW from the original channel
program was copied into the shadow channel program; any referenced
virtual addresses had the associated virtual page fetched (if
necessary) and fixed in real memory, and the real address of the virtual
page was then stuffed into the shadow ccw. when CCWTRANS was complete,
the shadow channel program (rather than the passed excp channel
program) was scheduled for execution (the virtual pages were pinned in
real storage so they wouldn't move until after the shadow channel
program ... with real addresses ... had finished).

at completion of the I/O operation, the shadow channel program was
rescanned ... and each virtual page that was pinned in the CCWTRANS pass
was then unpinned.
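
a rough, self-contained python sketch of that flow (all the names and
the toy page table are made up for illustration; this obviously isn't
the cp/67 code):

    from dataclasses import dataclass, replace

    PAGE_SIZE = 4096

    @dataclass
    class CCW:
        # one channel command word from the guest's channel program
        opcode: int
        data_address: int     # virtual address in the guest's address space
        count: int

    class VirtualMachine:
        # toy page table: virtual page number -> real page frame number
        def __init__(self, page_table):
            self.page_table = page_table
            self.pinned = set()

        def fix_page(self, vaddr):
            frame = self.page_table[vaddr // PAGE_SIZE]  # a real system would fault the page in here
            self.pinned.add(frame)                       # pin it so it can't move during the i/o
            return frame

        def unfix_page(self, frame):
            self.pinned.discard(frame)

    def ccwtrans(vm, guest_ccws):
        # scan the guest channel program, building the shadow copy with real addresses
        shadow, frames = [], []
        for ccw in guest_ccws:
            frame = vm.fix_page(ccw.data_address)
            real = frame * PAGE_SIZE + ccw.data_address % PAGE_SIZE
            shadow.append(replace(ccw, data_address=real))
            frames.append(frame)
        return shadow, frames

    def io_complete(vm, frames):
        # rescan pass after the i/o finishes: unpin everything fixed by ccwtrans
        for frame in frames:
            vm.unfix_page(frame)

    # usage: a one-CCW guest program referencing virtual page 2, mapped to real frame 7
    vm = VirtualMachine({2: 7})
    shadow, frames = ccwtrans(vm, [CCW(opcode=0x02, data_address=2 * PAGE_SIZE + 0x100, count=80)])
    print(hex(shadow[0].data_address))   # 0x7100: real frame 7 plus the page offset
    io_complete(vm, frames)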

one of the savings in V=R is that the passed excp channel program
could be executed directly since all of the addresses are "real" and
already pinned.

LPARs are essentially a subset of virtual machines ... with nearly
everything in hardware/microcode. LPAR support can get around the shadow
CCW stuff by a slight hack to the hardware i/o support ... and by having
all the memory for an LPAR contiguous. then the microcode just has to
handle base/bound relocation of every I/O address (i.e. rather than
having to translate every 4k virtual page to a real address, a fixed
offset just needs to be added to every LPAR i/o address).
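
a toy python contrast of the two i/o address translations just
described (purely illustrative; real LPAR microcode obviously doesn't
look anything like this):

    PAGE_SIZE = 4096

    def translate_per_page(page_table, guest_addr):
        # virtual-machine style: every 4k page can live in a different real frame
        frame = page_table[guest_addr // PAGE_SIZE]
        return frame * PAGE_SIZE + guest_addr % PAGE_SIZE

    def translate_base_bound(base, bound, guest_addr):
        # LPAR style: the partition's memory is one contiguous chunk, so a single
        # fixed offset (plus a limit check) covers every i/o address
        if guest_addr >= bound:
            raise ValueError("address outside the partition")
        return base + guest_addr

    # the same guest address under the two schemes
    print(hex(translate_per_page({0: 12, 1: 3}, 0x1100)))             # page 1 -> frame 3
    print(hex(translate_base_bound(0x40000000, 0x10000000, 0x1100)))  # base + offset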

there were all sorts of savings for the page-mapped interface ...
eliminating the scan, the shadowed ccws, the page fixing, scheduling the
real i/o, the rescan, the unfix, etc. (even for cms disk i/o, which
already had a fastpath version of CCWTRANS that i had originally done on
cp/67 as an undergraduate).

recent posting on some old comparisons of paged-mapped versus emulating
CCWs ... from a presentation that I had given at the fall '86 SEAS meeting
http://www.garlic.com/~lynn/2006.html#25 DCSS as SWAP for z/Linux

Ketil Malde <ketil+news@ii.uib.no> writes:
Perhaps a distinction of exclusivity can be made, caches that only
contain copies of the "real" memory is different from local memory
that is just faster to access for local CPUs.

we had sort of a 16-way tightly-coupled project in the mid-70s that
didn't have cache consistency (it was killed before being
announced/shipped). much later, rs/6000 had "oak" ... which was a 4-way
single-chip rios w/o cache consistency (single-chip rios or rios.9 ...
the original power/rios was a six-chip set ... i have a paperweight on
my desk with the chips).

there was some amount of work done on closely-coupled operation with
"lock memory" ... basically a loosely-coupled configuration but with a
special memory box providing memory semantics for lock protocols across
the complex. the special memory had some amount of special fault
characteristics. the current parallel sysplex stuff has inherited a
lot of that work. overview:
http://www-03.ibm.com/servers/eserver/zseries/pso/sysover.html

except for ims hot-standby ... it didn't see much uptake until
parallel sysplex.

a similar but different implementation was extended memory introduced
on 3090 ... basically a variation on numa to address distance latency
issues with physical packaging. it could be considered part of
multi-level store with software management. there was a special 4k
memory copy instruction into and out of "extended" memory.

pgut001@cs.auckland.ac.nz (Peter Gutmann) writes:
I think a better way of doing this (which I mentioned on the PKIX
list recently) is to not use policy OIDs (which are meaningless to
99.9% of all users) but to indicate in the cert how much the cert
owner paid for it. Someone who paid $495 for their cert is
obviously more serious about it than someone who paid $9.95, and
since HA certs are just a way for CAs to charge more for their
certs, the end effect will be "more expensive == more HA" anyway.

one of the things that we looked at was higher assurance
certifications. the issue is that certification is basically providing
trust for parties (aka consumers) that don't otherwise have a
basis for trust.

one problem was that the e-commerce activity was highly skewed ... a
few big merchants with the majority of the transactions and business.
their position was that they already had trust via other mechanisms
like brand, repeat business, etc.

the vast majority of the merchants ... and the ones most in need of
certified trust establishment ... only accounted for a very small amount
of business. they had a difficult time with cost justification for any
really significant trust certification (fbi background checks on
employees, physical audits of their operations, strict mandatory
security processes ... both physical and dataprocessing, etc).

another problem was that the market niche was already occupied by
credit card rules that tended to reimburse consumers if anything went
wrong ... aka merchants recoup expenses through the price they charge
consumers ... they were already paying a premium on credit card
transactions to help support risk mitigation for customers. the
certified trust requirements are somewhat proportional to risk ... if
risk has been significantly bounded already ... there is much less
requirement for expensive trust mechanisms.

it was in that early time period that we coined the phrase merchant
comfort certificates.

now you might talk about the certificates themselves being high
assurance ... as opposed to the certification process that the
certificates are supposed to represent. think of it as having a
diploma that is printed on absolutely tamper proof paper ... that has
been issued by some diploma mill that doesn't require the graduate to
have met any requirements other than paying for the diploma.

the multiprocessor 360s in the 60s were touted as being fully
independent. while they could be configured with shared memory
(processors having uniform access to all memory) ... the processors
had private i/o. the machines could be partitioned ... each having a
portion of real storage as private dedicated memory.

a shared-memory multiprocessor could have the memory cleaved and the
resulting configuration could operate as loosely-coupled (shared i/o)
w/o having tightly-coupled shared-memory.

the exception was the 360/67 ... which not only had support for
virtual memory, but also had something called a "channel controller"
which allowed all processors shared access to all i/o
interfaces ... however, the channel controller could be configured to
cleave the configuration into independent units with dedicated memory
and dedicated i/o interfaces.

while waiting for delivery of a 360/67, the science center took a
standard 360/40, added custom virtual memory hardware, and used it to
build the original virtual machine operating system (cp/40). when their
360/67 was delivered, they ported the software, calling it cp/67.

at the science center, charlie was doing multiprocessor work on cp/67
and invented the compare&swap instruction ... which showed up on 370
machines (in the early 70s). it wasn't originally the compare&swap
instruction ... however charlie's initials are CAS and so we had to
come up with an instruction mnemonic that matched his
initials. misc. past smp related posts
http://www.garlic.com/~lynn/subtopic.html#smp

"Schuh, Richard" writes:
As others have pointed out, accessing an active SFS directory can be
a big contributor. I confirmed this by letting a machine sit idle
for 2 hours while it had such a directory accessed. After 2 hours, I
released the directory and it was disconnected 15 minutes later. In
this case, I would expect to see VTIME incrementing because CMS is
accepting the interrupts. If they are IUCV interrupts, that measure
is blown out the door, too. Is there no such thing as an idle user
anymore?

cp kernel time charged to the virtual machine ... can be for anything
that the kernel does on behalf of the process.

I had originally done the pageable kernel stuff on cp/67 (as an
undergraduate) ... by creating a dummy virtual machine id (called
system) with dummy virtual address space tables ... into which all the
kernel code was mapped. the end of the fixed kernel was identified and
then everything else was allowed to "page". this wasn't shipped as
part of cp/67 but was picked up as part of vm370.

when I did the swaptable migration ... i created an abbreviated pseudo
virtual address space for each virtual machine ... primarily so that
the paging subsystem could be leveraged for moving data to/from disk.
basically nearly all control blocks could be copied into this dummy
virtual address space ... leaving only a small stub in real
storage. at least one of the vm370-based time-sharing service bureaus
got very aggressive with this technique, minimizing the amount of
resources that an idle virtual machine left behind in real storage:
http://www.garlic.com/~lynn/submain.html#timeshare

when an idle virtual machine was found, the kernel overhead for moving
various control blocks out of storage to disk would be charged against
that virtual machine. however, it wouldn't be ongoing ... unless the
virtual machine was being periodically woken up for other reasons (and
the control blocks had to be brought back and then removed again
later).

Charging Time

Anne & Lynn Wheeler writes:
when I did the swaptable migration ... i created an abbreviated pseudo
virtual address space for each virtual machine ... primarily so that
the paging subsystem could be leveraged for moving data to/from disk.
basically nearly all control blocks could be copied into this dummy
virtual address space ... leaving only a small stub in real
storage. at least one of the vm370-based time-sharing service bureaus
got very aggressive with this technique, minimizing the amount of
resources that an idle virtual machine left behind in real storage:
http://www.garlic.com/~lynn/submain.html#timeshare

as real storage has gotten larger and larger ... there is less of a
need to use control block migration to address constrained real
storage limitations (unless a really humongous amount of real storage
is being consumed by idle activity).

note however, once there was a really aggressive implementation of
moving all control blocks to secondary storage ... it was no longer
necessary to bring the virtual machine back into the same real
processor as it started with (in a loosely-coupled environment).

as some of the early vm370-based time-sharing service bureaus went
7-24 with world-wide customers ... they started to encounter issues
with scheduled maint. downtime. at least one of them used really
aggressive control block migration as basis for moving everything off
a processor that had scheduled pm downtime to another processor
complex in the same configuration (w/o customers seeing any service
disruption).

Anne & Lynn Wheeler writes:
now you might talk about the certificates themselves being high
assurance ... as opposed to the certification process that the
certificates are supposed to represent. think of it as having a diploma
that is printed on absolutely tamper proof paper ... that has been
issued by some diploma mill that doesn't require the graduate to have
met any requirements other than paying for the diploma.

aka, nominally a license/credential/certificate/diploma type document
is the representation of something ... as opposed to being considered
the something itself. there are somewhat separate issues involved with
the integrity/assurance of the document ... and the
integrity/assurance of the information the document is supposed to
represent.

so whether or not you have a counterfeit driver's license is somewhat
separate from whether or not the information in a valid driver's
license is correct. although, it is possible to make the case that if
counterfeit driver's licenses are commonplace ... then there isn't
much point in worrying about the validity of information in valid
driver's licenses. this possibly contributes to it now being
commonplace for officers to do real-time, online checks in almost any
circumstance (licenses starting to be a vehicle for carrying an index
to the real, online information ... as opposed to being expected to
carry all the real, real-time information).

one of the issues that came up in a high assurance discussion is what
are some of the fundamental high assurance issues in public key
operations.

some of the most fundamental assurance issues (that nearly all other
operations are scaffolded on) are

1) assurance/integrity level of the crypto
2) assurance/integrity level of the keygen process
3) assurance/integrity level of the environment protecting the
private key
4) assurance/integrity level of the environment that digital
signatures occur in

for digital certificates, this would apply to both the assurance
issues related to the certification authority as well as the assurance
issues related to the entity that a digital certificate has been
issued for. to build an assurance infrastructure ... you first start
with certifying those fundamental aspects for both the certification
authority as well as the entity to which a digital certificate has
been issued.

so if somebody was designing a public key digital certificate from
scratch, that was supposed to certify something and represent some
level of assurance ... they would include at least the
assurance/integrity levels for the above four characteristics for both
the certification authority as well as the entity for which the
digital certificate has been issued.
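
a minimal sketch (in python) of what such a certificate payload might
carry ... the field names and sample values are made up purely for
illustration; this isn't any existing certificate format:

    from dataclasses import dataclass

    @dataclass
    class AssuranceLevels:
        crypto: str                   # 1) assurance/integrity level of the crypto
        keygen: str                   # 2) ... of the keygen process
        private_key_protection: str   # 3) ... of the environment protecting the private key
        signing_environment: str      # 4) ... of the environment signatures occur in

    @dataclass
    class HighAssuranceCert:
        subject: str
        issuer: str
        ca_assurance: AssuranceLevels        # the certification authority's own levels
        subject_assurance: AssuranceLevels   # the levels certified for the entity
        certified_claims: dict               # whatever information is actually being certified

    cert = HighAssuranceCert(
        subject="example merchant",
        issuer="example certification authority",
        ca_assurance=AssuranceLevels("fips140 level 3", "hardware rng", "hsm", "audited facility"),
        subject_assurance=AssuranceLevels("fips140 level 1", "onboard keygen", "smartcard", "unknown"),
        certified_claims={"domain": "example.com"},
    )
    print(cert.subject_assurance.signing_environment)   # the chicken & egg item discussed below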

oops, i forgot, there is something of a chicken & egg situation. the
4th item is a real-time characteristic ... and it seems frequently
that the common PKI impression is that you can't do a digital
signature until you have a digital certificate ... but if you have a
digital certificate before you do a digital signature ... it is
difficult for the digital certificate to be able to indicate the
environment that all future digital signatures happen in.

one approach would have certified security modules and certified modes
of operation. after the authenticating entity digitally signs the
transaction, the terminal would then co-sign the operation (attesting
to the environment that the operations are occurring in).

with those basic, underlying fundamental assurance issues dealt with,
it would then be possible to add assurance/integrity information
related to the information that is supposedly being certified (which
certificate/license/credential/diploma documents are supposed to be a
representation of ... unless you are in transition to an online,
realtime system ... like driver's licenses and other important
operations).

so i would start by asserting that any "high assurance" digital
certificate would begin by giving assurance/integrity levels for the
environment that keygen occurred in and the assurance/integrity level
used for private key protection (for the entity that the digital
certificate is being issued to).

if the certification authority is running any software, they might
throw in things like the certified process that their software was
developed under, i.e. has the certification authority's software been
developed with trusted software practices ... one of my favorite
web pages:
http://www.software.org/quagmire/

see the right-hand section of the above ... 2167A, 498, 12207.

of course, these assurance issues are somewhat orthogonal to whether
any of the other information being certified provides any incremental
trust for the eventual relying parties (in the common e-commerce
scenario, the consumer).

Ed Gould wrote:
> I think the 2314 was the original suggestion by IBM.

2314 (29 mbytes) and 2301 (paging drum, 4mbytes) were contemporaries on
360/67 (early 360/67s tended to have 2311s, 7mbytes, before 2314s were
available). there were two drum models, the 2303 and the 2301. the 2303
read/wrote a single head. the 2301 was nearly identical but read/wrote
four heads in parallel (for four times the data transfer rate).

the above gives both nominal access/sec/drive as well as normalizing the
access rate by capacity ... giving access/sec/mbyte ... i.e. a 3380
limited to just 40mbyte gives approx. the same access/sec/mbyte as
fully loaded 2314.
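
a back-of-the-envelope python version of that normalization (the
seek/rotation figures below are just placeholders, not the numbers from
the comparison referenced above):

    def accesses_per_sec(avg_seek_ms, avg_rotation_ms):
        # nominal random accesses/sec/drive, ignoring transfer time
        return 1000.0 / (avg_seek_ms + avg_rotation_ms)

    def accesses_per_sec_per_mbyte(avg_seek_ms, avg_rotation_ms, mbytes_in_use):
        # normalize the access rate by the capacity actually in use
        return accesses_per_sec(avg_seek_ms, avg_rotation_ms) / mbytes_in_use

    # placeholder figures: the same drive "looks" much better per mbyte
    # when only a fraction of its capacity is actually in use
    print(accesses_per_sec_per_mbyte(16, 8.3, 630))   # whole drive
    print(accesses_per_sec_per_mbyte(16, 8.3, 40))    # same drive, only 40 mbytes in use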

370s (especially after virtual memory was announced) tended to have
3330-1 (100mbytes) and 2305 (12mbytes, fixed head disk). there were
actually two 2305 models; one with 12mbytes and one with 6mbytes. the
6mbyte model had same number of heads as 12mbyte model but "ganged" and
offset 180degrees (and 1/2 the tracks). it only read/wrote a single head
... but chose the first head that encountered the desired record (and
therefor had 1/2 the avg. rotational delay).

around 1980, a lot of internal sites got "1655". these were emulated
2305s built by a memory chip vendor that used memory chips that had
failed some standard memory chip test. There were enuf usable bits in
the chip that the "1655" controller circuitry was able to compensate for
various failures (at least in i/o transfer operations).

the 3350 had a couple cylinders that could have a fixed-head option. in
the late 70s, i was trying to get a feature added that would provide a
separate "exposure" address to any fixed-head region (to avoid having
fixed-head i/o blocked by 3350 arm motion i/o). this ran into politics
with a POK group that was planning something called vulcan ... basically
something similar to (later) 1655 ... or imagine something like 3090
expanded store but with i/o semantics. they felt that such an enhanced
3350 fixed-head option for paging would compete with vulcan (vulcan was
canceled before announce).

a joint endicott/cambridge project modified cp67 to implement 370
virtual machines (which were similar to the 360/67 but differed in the
virtual memory hardware architecture). the basic csc/vm production cp67
(running on cambridge's 360/67) was "cp67l"; the version with the joint
project's modifications to provide 370 virtual machines was "cp67h".

the cp67h system rarely ran on the real hardware. the issue was that the
cambridge machine also provided various kinds of time-sharing service to
people at educational institutions in the boston area (mit, bu,
harvard, etc. students). 370 virtual memory hadn't been announced and
there was a real security issue: if the cp67h system ran on the real
hardware, 370 virtual memory details might leak. as a result cp67h
tended to only run in a 360/67 virtual machine under the cp67l production
system.

once cp67h was operational, then there were modifications to create a
cp67 that ran on 370 virtual memory called "cp67i". a year before the
first 145 engineering model with virtual memory hardware, cp67i was
regularly running in 370 virtual machines under cp67h (which was running
in 360/67 virtual machines under cp67l).

there is an old story about when the endicott engineers felt that they
finally had the 145 virtual memory hardware ready to test and there was a
trip to endicott with a copy of cp67i. the first boot failed. after a
little diagnosis, it turned out that the engineers had reversed two of
the new "B2xx" op-codes. the kernel was quickly patched to correspond
to the (incorrect) hardware and the system successfully booted.

after that there was period when most of the internal 370s (with virtual
memory support) ran with cp67i (both before and after 370 virtual memory
was announced). early on, there were three disk engineers that came out
from san jose and added the support for 3330 and 2305 ... including
supporting RPS (lots of early 145s had been running with 2314s). cp67i
with 3330 & 2305 support was frequently referred to as cp67sj.

the early VS2 prototype started out with a version of MVT with virtual
memory software support hacked onto the side incorporating a cribbed
version of CP67's CCWTRANS to do the virtual->real channel program
translation.
http://www.garlic.com/~lynn/2006.html#31 Is VIO mandatory

adding virtual memory hardware to 370/165 was holding up the whole 370
virtual memory announcement. there was a resolution meeting where the
165 engineers proposed dropping some of the 370 virtual memory features,
which would cut six months off their effort. when this was accepted, all
existing implementations on other models needed the dropped features
deleted.

Edward E. Jaffe wrote:
Getting back to basics: What Binyamin has asked for is the ability to
have the AX for one TCB/RB in an address space be different than the AX
of another TCB/RB in the same address space. On the surface, that seems
like a reasonable "wish list" item to me. We've established that this
function doesn't exist given the current cross-memory architecture,
developed decades ago -- even before XA/370, but that doesn't invalidate
the merits of the idea. Does it?

"Gordon Wolfe, Ph.D." writes:
For those who are interested, my position will be opening up.
Boeing will actually have 2 positions in VM Systems opening, as soon
as we fill the position of Delivery Systems Manager. My last two
managers quit within three months of each other. Maybe it's my
deodorant? I don't know when the position(s) will open up. I doubt
that hiring will be an immediate priority for the new manager. But
go to http://www.boeing.com, at the top select 'employment', and at
the right select 'Job Search'. This is 'Information Technology' or
possibly 'Computing Delivery Systems', 'Salaried Non-Management',
state of Washington, 'Experienced'. The job title is 'System Design
Integration Specialist', which might be abbreviated 'Syst Desn Intgr
Spec'. Search keywords 'VM' or 'ESS' or 'LINUX', and maybe on the
job classification BCBDP4. This might be filled at a P3 or even a
P2, so check those too. I'd check once a week. These req's
don't stay open very long when they do open up. The last time we
opened a req, we got 88 applications, interviewed 6 and hired none
of them. The position is still open.

people from the cambridge science center had come out to the university
the last week in jan. '68 to install
cp/67. I did extensive modifications on cp/67 (as well as os/360)
during that spring and summer and got to make a presentation on a lot
of the os/360 and cp/67 work at the fall '68 SHARE meeting in Atlantic City.

BCS was formed around dec(?). '68. BCS was to take over most of the
data processing at boeing and had made some decision to do some amount
of work with virtual machines (i think part of the idea was that BCS
could sell services not only inside boeing but also to other
customers). IBM and Boeing talked me into teaching a one-week (40hr)
cp/67 class at the univ. to the (then small) BCS technical
staff during spring break. BCS was still starting up and the core was
located at BGS dataprocessing corporate hdqtrs across from Boeing
field. They had a 360/30 for doing Boeing payroll. A "simplex"
(uniprocessor) 360/67 was brought into the same machine room for
cp/67.

I was hired as a full-time BCS employee (even tho I was still an
undergraduate) for that summer (69) at some level that gave me a badge
allowing me to park in the management parking lot next to the
bldg. (as opposed to the large parking lot behind it where most of the
assembly workers parked). BCS hdqtrs people were still in discussions
about absorbing the other datacenters (like the big one down in
Renton).

Everett was still starting up and the "003" 747 was flying FAA
certification tests in the skies around Seattle that summer. When I
first arrived, I was taken on a tour of a number of facilities. One that
I still remember was a mockup of a 747 interior and being told that
the 747s were so large and carried so many people that they would never
be served by fewer than four jetways (for passenger
loading/unloading). What percentage of the time have you seen a
wide-body with even two jetways (instead of just one)?

For the summer, I was renting the basement from one of the engineers
working on the 747. They had remodeled their house and the basement was
now its own apartment. He had stories about some sabotage that was
occurring that summer on the 747 project in Everett.

BCS also managed to latch onto a two-processor 360/67 multiprocessor
that was down at Boeing Huntsville and have it moved to Seattle. It
had been running a highly modified version of os/mvt release 13
supporting a lot of 2250 graphics work. The 2250 graphics work was
long-running jobs, and MVT had significant memory fragmentation
problems with long-running jobs. Basically it was running 360/65 SMP
code with a small modification to turn on the 67's virtual memory.
There was no paging going on ... the virtual memory support was purely
being used to re-organize memory allocation into contiguous locations
(workaround for the underlying memory fragmentation).

that summer at boeing, one of the things I managed to do was rework
some of the internal CP67 kernel linkage. I had previously done a lot
of specific pathlength rewrites and fastpath work ... some of which
were mentioned in the '68 Atlantic City SHARE presentation. the original
cp67 did all linkage using svc8 for calls and svc12 for returns. I had
analyzed a number of high-usage, short-pathlength subroutines that
could benefit from BALR linkage (i.e. the svc call/return pathlength
was frequently as long as the subroutine pathlength itself). I modified
the call/return macro to use the BALR linkage convention (instead of
SVC) for the selectively modified routines.

Also that summer, I had done the first implementation of a pageable
kernel structure (again with a hack in the call/return for the selected
routines). I had started with specific (lower-usage)
functions/features in the console module ... which was a single large
routine. One of the things that I first needed to do was break it into
multiple 4k-sized chunks for the portions that were going to be
paged. Breaking "console" into multiple 4k chunks created a lot of
additional external entries, which then caused a problem with the
stand-alone loader used for cp67 system generation. The cp67 loader
had started out life as a standard "BPS" card loader with minor
modifications. It had a limit of 255 external entries. The breakup of
console pushed the number of entries in the cp67 kernel over 255,
which in turn broke the loader. While every other piece of cp67 was
shipped as source code, there was no source code for the loader. It
was a real pain in the rear to work around this problem.

the balr linkage stuff eventually shipped in cp67 ... but the pageable
kernel stuff didn't show up in the product until vm370.

IBM 610 workstation computer

David Scheidt writes:
There are a fair number of installations where the computer doesn't
change much. Point-of-Sale systems don't change very fast, for
instance. Neither do telco switches. Or lots of other things where
the general-purpose computing ability of the computer isn't the main
goal of the system. Sure, you can use the 3B20 in a 5ESS to do
things other than control the switching modules, but why? Many of
these things have installed lifespans of decades.

I've referred before to an issue in the mid to late 70s about internal
availability of 327x terminals requiring vp sign-off approval for each
terminal. we did a business analysis showing that the 3-year amortized
cost of a 327x terminal was about the same as a business phone on the
desk ... which was taken as a matter of course ... and didn't require
vp approval for each phone.

note however, many 327x terminals eventually saw lifetime service of a
decade or more (i had a 3277 terminal for nearly 20 years).

for tax purposes ... lots of stuff used to have 7-10 year (or more)
depreciation. as the rate of technology change increased ... you
started to see things like 3-year write-offs.

the rate of change sort of leads to the recent (slightly related
automobile) topic drift about a recent press conference mentioning
trying to reduce concept-to-showroom time down to 3yrs (this is not
quite the same as the 1990 auto industry effort trying to reduce
concept-to-showroom time from 7 years down to 18-36 months, to put it
on a level with foreign competition). recent post:
http://www.garlic.com/~lynn/2006.html#23 auto industry

POS terminals are now under lots of pressure. a couple months ago, i
stayed at an inn where the manager complained that he had recently
been forced to pay over $500 for a new terminal ... which didn't even
have support for any of the new chip stuff ... and the new chip stuff
will require another new terminal in a year or so.

HMerritt@ibm-main.lst (Hal Merritt) writes:
One just *has* to wonder if the outsourcing was mostly a ploy to
deal with an out of control culture. You transfer all management to
a third party. Let them 'retrain' the troops. Then you bring it back
under new management with new marching orders.

frequently the executive experience for managing an IT organization is
totally different from the executive experience required for operating
the rest of the business.

this can lead to an out-of-control IT organization. it can also lead to a
very political (and potentially unprofessional) relationship between the
rest of the organization and the IT organization (i.e. lack of binding
legal contracts, inadequate service level agreement contracts, etc).

IT services can also significantly suffer ... if the IT organization is
viewed as purely a cost center ... and the people making budget decisions
have little concept of what is needed to provide quality (and
possibly even necessary) IT service ... vis-a-vis budget allocation for
the rest of the organization.

to some extent this has also accounted for some of the datacenter to
desktop transitions; you turned everybody into their own system
administrator ... initially eliminating dedicated head count in the (IT)
cost center (obviously it cost less if everybody was doing it for
themselves).

another approach for dealing with some of the issues has been to turn
IT into an independent company (moving from an internal cost center to a
profit center) ... recent posting regarding the early days of BCS
(boeing computer services):
http://www.garlic.com/~lynn/2006.html#40 All Good Things

outsourcing works fairly well during periods of long term stability.
however, in periods of great change ... the ability of an organization
to quickly adapt to changing conditions (agility) can be inhibited by
arms' length legal agreements. Unanticipated and/or rapid changes in
IT services are probably not provided for in the outsourcing contract.
Agility and rapid adaptation frequently also involve some amount of
experimentation aka trying new stuff for the first time; unless you
can perfectly predict the future, not all experimentation regarding
unknowns will be successful.

and here is a recent OODA-loop reference somebody just
forwarded to me
http://www.affordablehousinginstitute.org/blogs/us/2005/10/doing_the_gover_1.html

boyd's briefs had numerous examples from military history about
agility being a deciding factor in conflicts ... along with some tying
agility to modern business conflicts.

C4 was a 1990 project in the automobile industry that was almost all
IT-related with an attempt to get time from concept to showroom down
from seven years to 18-36 months (putting it on a level with foreign
competition). recent posting mentioning auto industry agility
http://www.garlic.com/~lynn/2006.html#23 auto industry
another minor reference
http://www.garlic.com/~lynn/2006.html#42 IBM 610 workstation computers

greymaus writes:
Some groups here got into big trouble leasing equipment that outdated
more quickly than their write-off periods. (Specially with M$ bringing
out 'new' OS every 5 years or so).. The great bulk of equipment here,
Ireland, is leased, and this can have serious consequences if mistakes
are made by the accountants..

Re: faster introduction of new models by foreigners, is this because
of better integration of the production process?.. or are they just
more committed ?

possibly at least both? ... in some cases they were reducing elapsed
time from concept to showroom to as little as 18 months.

in approx. the same time frame, similar foreign companies were
reducing elapsed time between consumer electronics models to 90
days. this was introducing some interesting distribution issues since
leaving manufacturing, getting into containers, onto ships, into US
ports and out to distribution centers might represent a significant part
of that time (and tracking discontinued models and introduction of new
models became issues).

the acceleration in the speed of change ... also starts to impact
things like long standing, traditional business processes gated by
annual fall budget planning. things are smoother if change isn't
happening faster than the annual planning cycle. however, all bets are
off when change starts happening faster than the annual planning
cycle.

with respect to the outsourcing posting also mentioning other aspects,
the c4 auto scenario involved some issues that were somewhat the
reverse ... a large auto company had purchased a large system
integrator/outsourcing company and there were public statements about
turning over all internal IT operations to that organization.
http://www.garlic.com/~lynn/2006.html#43 Sprint backs out of IBM outsourcing deal

in the mid-80s, i had an issue with paying nearly $20k for some
computer-specific hardware with technology that wasn't as good as I
could get from a $300 cdrom drive (and i was starting to predict that
consumer electronics was going to become common in even fairly
high-end computer equipment).

THE GOOD
========
Moving again, and it's time to face the reality that I'm never going to
get around to this project. Thus, I am giving away free to a good home
(you pick up in Portland, OR, or pay actual cost to ship) one IBM RT,
"an early workstation from IBM, based on the IBM '032' or ROMP CPU",
"also known as the model 6150, 6151, and 6152." If you don't know what
this is, you probably don't want it. It was basically a precursor to the
RS/6000, manufactured circa 1986. It will run AIX UNIX, or AOS (a 4.3BSD
derivative). I have tons of parts and extra cards for this beast. Extra
CPUs, a number of ESDI disk drives, a monitor, a very cool old-school
IBM two-button mouse, a tape drive, etc. It's all there. I do not have
part numbers or pictures of this stuff, but if there is any interest I
will be glad to photograph everything.

does it have a megapel display? i had gotten an rt with a megapel
display into a booth at '88 interop ... (kitty-corner next to case's
snmp booth) ... and case was talked into getting snmp up and running on
the machine for the show. old interop '88 postings:
http://www.garlic.com/~lynn/subnetwork.html#interop

1. [from Virtual Address eXtension] The most successful minicomputer
design in industry history, possibly excepting its immediate
ancestor, the PDP-11. Between its release in 1978 and its eclipse
by killer micros after about 1986, the VAX was probably the hacker's
favorite machine of them all, esp. after the 1982 release of 4.2
BSD Unix (see BSD). Esp. noted for its large,
assembler-programmer-friendly instruction set -- an asset that
became a liability after the RISC revolution. 2. A major brand of
vacuum cleaner in Britain. Cited here because its sales pitch,
"Nothing sucks like a VAX!" became a sort of battle-cry of RISC
partisans. It is even sometimes claimed that DEC actually entered a
cross-licensing deal with the vacuum-Vax people that allowed them to
market VAX computers in the U.K. in return for not challenging the
vacuum cleaner trademark in the U.S.

the 4341 was in the same market segment (780, not micro-vax), about
the same performance, shipped more, better price/performance,
etc. ... a reference to somebody commenting that 11k of the vax
shipments should have been 4341s:
http://www.garlic.com/~lynn/2001m.html#15 departmental servers

"Tim Shoppa" writes:
The Intel PL/M compiler that generates i8008 code is out there on the
net. It used to be on the oak.oakland.edu ftp site in the CP/M
directory, but while that archive is gone I'm convinced that it must be
still out there.

I think Gary Kildall was probably the actual author of that compiler.
The compiler itself is in somewhat portable Fortran IV. (Hollerith
strings IIRC, making those portable is actually a lot of work). It
compiled and worked out-of-the-box on VAX Fortran and also I think the
PDP-11 Fortrans, and while those were nominally Fortran 77 compilers
they also did a very good job supporting dusty-deck Fortran code.

Modern Fortran compilers (including the GNU toolchain and f77 on most
commercial unix platforms) are not nearly so good about supporting
dusty-deck code.

The 8008 is not exactly your ideal target for a compiler. Stack and
math operations exist but not in the same way you'd expect on a modern
target (or even say a PDP-11 target). But they made it work quite well
(PL/M is actually a pretty well-targeted language for typical embedded
microcontroller stuff, unlike say C).

Much of CP/M was written in PL/M (ignore those disassemblies which make
it appear as if it were authored in assembler) and if IIUC the very
earliest versions of CP/M actually ran on 8008's.