Morten Reistad <first@last.name> writes:
This is a welcome development in the "aid industry". They are
so used to public monies where accountability for actual results
is abysmal. This is a field where mr Gates can contribute a lot
just by being his normal self. I wish him all the best.

Morten Reistad <first@last.name> writes:
No, not "will have computer chips"; "have had computer chips for a
year or two". They are called RFID tags. They are tags that can be
read at 2-200 ft range (depending on sophistication of equipment).

there is contactless/proximity technology ... like the wash. dc metro
(cubic) ... or the HK octopus card (iso 14443, sony chip, distributed
by mitsubishi, erg). these typically have the transit gate read the
contents, decrypt them (with a derived symmetric key), update them,
re-encrypt them, and write the updated encrypted contents back.
http://www.smartcardalliance.org/newsletter/september_04/feature_0904.html

these can be purely memory with no on-chip intelligence or processing.
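
the gate-side cycle described above can be sketched roughly as follows
(purely illustrative python; the master key, key derivation, record
layout and xor "cipher" are invented stand-ins, not any actual transit
system implementation -- the point is that with a pure-memory card, all
of the crypto happens in the gate/reader):

    # rough sketch of the read / decrypt / update / re-encrypt / write-back
    # cycle done by the transit gate.  everything here is hypothetical.
    import hashlib, hmac, json

    MASTER_KEY = b"hypothetical-issuer-master-key"

    def derive_card_key(card_id: bytes) -> bytes:
        # per-card symmetric key derived from the issuer master key,
        # so any gate can reconstruct it from the card id
        return hmac.new(MASTER_KEY, card_id, hashlib.sha256).digest()

    def xor_stream(key: bytes, data: bytes) -> bytes:
        # stand-in for a real block cipher (DES/AES on actual cards)
        pad = (key * (len(data) // len(key) + 1))[:len(data)]
        return bytes(a ^ b for a, b in zip(data, pad))

    def gate_cycle(card_id: bytes, stored: bytes, fare: int) -> bytes:
        key = derive_card_key(card_id)
        record = json.loads(xor_stream(key, stored))          # read + decrypt
        record["balance"] -= fare                             # update
        return xor_stream(key, json.dumps(record).encode())   # re-encrypt, write back

    # e.g. stored = xor_stream(derive_card_key(b"card-123"),
    #                          json.dumps({"balance": 1000}).encode())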

i had a weird experience with a wash dc metro card a couple years ago
... where i had left a metro station with something like (positive)
$10 still on the card ... and the next time I tried to use the card,
the reader claimed there was a negative $5 balance (while outside the
transit system, the card had lost $15 and gone $5 negative w/o being
used).

a lot of RFID started out being next generation barcode; just read the
number ... a lot more digits allowing unique chip identification down
to individual item level (rather than just vendor and product) and
being able to inventory w/o having to manually count each individual
item. a big driver recently has been walmart mandating them from
suppliers. they would like to get these chips/technology into the
penny range (or even less; along with new & less expensive methods
of producing the RFID signal w/o necessarily using a traditional chip
fabrication process)

with (pure barcode) RFID technology becoming more prevalent, there are
other applications trying to leverage it.

Brian Inglis <Brian.Inglis@SystematicSW.Invalid> writes:
I don't think that applies: modern desktop CPUs have all the features
of high end mainframes and some supercomputer features; some
instructions execute in 0.25ns, they support GB memory so need 3 level
cache, Gb/s networking and I/O; hardware multi-threading because they
can't keep the CPUs busy; and multi-CPUs per die because faster clocks
aren't giving enough thruput increase.

the other issue with faster clocks is that latency across the chip is
becoming significant. with multiple CPUs on the same chip ... you can
reduce the distance (and therefore time) that a synchronous signal has
to travel.

as the chip sizes remained somewhat the same ... while the
circuit sizes shrank ... you also had significantly more circuits per
chip. you could use the additional circuits for multiple cores ... but
you could also use the circuits for on-chip caches. you could have
dedicated on-chip "L1" caches per cpu core ... and a shared on-chip "L2"
cache for all cpu cores on the same chip. that means that any
off-chip cache becomes "L3".

the modern out-of-order execution is at least equivalent of anything
that 370/195 (supercomputer) had ... and there is also branch
prediction, speculative execution (down predicted branch path) and
instruction nullification/abrogation (when prediction is wrong)
... which 370/195 didn't have.

the out-of-order execution helps with latency compensation (i.e. when
one instruction is stalled on some fetch operation ... execution of
other instructions may proceed somewhat independently).
multi-threaded operation was also a form of latency compensation
... trying to keep the execution units filled with independent
work/instructions.

370/195 did allow concurrent execution of instructions in the pipeline
... but branches would drain/stall processing. i had gotten involved
in a program to add multi-threading to a 370/195, i.e. dual
instruction stream; registers and instructions in the pipeline having
a one-bit tag identifying which instruction stream they belonged to (but
not otherwise increasing the hardware or execution units). however,
this project never shipped a product.

this was based on the peak thruput of the 370/195 being around ten mips
... but that required careful management of branches ... most codes ran
at five mips (because of the frequent branches that drained the
pipeline). dual i-streams (running at five mips each) had a chance of
keeping the "ten mip" peak execution units busy.

Trying to design low level hard disk manipulation program

Bill Todd <billtodd@metrocast.net> writes:
Many industrial-strength file systems (including the major Unix
variants, NTFS, VMS's ODS-2/5...) have a level of indirection between
a file's directory entry and the file's data that FAT lacks: the
on-disk inode in Unix, the MFT entry in NTFS, the index file entry in
ODS2/5. Thus at each stage of a path look-up (directory ->
subdirectory -> subdirectory... -> file), there's an extra disk access
(unless the inode-like structure happens to be cached) before you can
get to the data at the next level.

the cms MFD (master file directory) dates from the mid-60s cms
filesystem ... which had some number of incremental improvements and
then saw the enhanced/extended (EDF) filesystem in the mid-70s.

the original mid-60s implementation supported sparse files ... so
there were various null pointers for indirect hyperblocks and
datablocks that didn't actually exist.

one of the unofficial early 70s incremental improvements to the cms
filesystem was that the directory file block pointer would point directly
at the data block ... instead of at an indirect hyperblock ... for files
that had only one data block (for small files, instead of having a
minimum of two blocks, one indirect hyperblock and one data block, it
would just have the single data block). another unofficial early/mid
70s incremental improvement was various kinds of data compression. i
think both of these were originally done by perkin-elmer and made
available on the share waterloo tape. there was some performance
measurement for the p/e compression changes ... showing that the
filesystem overhead to compress/decompress the data in the file was
frequently more than offset by the reduction in cpu overhead
reading/writing the physical blocks to/from disk.

one of the things that the mid-70s EDF extensions brought to the cms
filesystem was multiple logical block sizes (1k, 2k, & 4k) and more
levels of indirect hyperblocks ... supporting up to five levels of
indirection for large files ... i.e. in a 4k filesystem a single
hyperblock supported up to 1024 four-byte data block pointers. a two
level hyperblock had the first level pointing to up to 1024
first-level hyperblocks which each then would point to up to 1024 4k
data blocks. as a file grew, the filesystem could transition to higher
levels of hyperblock indirection.
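
just to make the indirection arithmetic concrete (a rough
back-of-the-envelope sketch in python, using the 4k block / four-byte
pointer numbers from the paragraph above; not anything from the actual
implementation):

    # each 4k hyperblock holds 1024 four-byte pointers, so each added
    # level of indirection multiplies the maximum file size by 1024
    BLOCK = 4096
    POINTERS = BLOCK // 4            # 1024 pointers per hyperblock

    for levels in range(1, 6):       # EDF allowed up to five levels
        max_blocks = POINTERS ** levels
        print(f"{levels} level(s): {max_blocks} data blocks max "
              f"= {max_blocks * BLOCK // 2**20} mbytes")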

in the early 70s, i had done a page-mapped layer for the original
(cp67) cms filesystem ... and then later upgraded the EDF filesystem
(by that time cp67 had morphed into vm370) to also support the
page-mapped layer construct
http://www.garlic.com/~lynn/submain.html#mmap

there is some folklore that various pieces of ibm/pc and os2
filesystem characteristics were taken from cms. note also that both
unix and cms trace some common heritage back to ctss.

krw <krw@att.bizzzz> writes:
Yeah, right. That must be why IBM's internal network was bigger
than the ARPAnet until '85 or so (queue LynnW).

:)

part of the issue was that the official/strategic communication
product was SNA ... which was effectively a large master/slave paradigm
in support of a mainframe controlling tens of thousands of (dumb)
terminals (there were jokes about sna not being a system, not being a
network, and not being an architecture).

in the very early sna days, my wife had co-authored a (competitive)
peer-to-peer architecture (AWP39). she then went on to do a stint in
POK responsible for loosely-coupled architecture (aka mainframe
cluster) where she created the peer-coupled shared data architecture
... which, except for IMS hot-standby, didn't see a lot of uptake until
parallel sysplex
http://www.garlic.com/~lynn/submain.html#shareddata

there were some number of battles with the communication group
attempting to enforce the "strategic" communication solution for all
environments (even as things started to move away from the traditional
tens of thousands of dumb terminals controlled by a single mainframe).
san jose research had an eight-way 4341 cluster project using
trotter/3088 (effectively an eight channel processor-to-processor
switch) that they wanted to release. in the research version using a
non-sna protocol ... a full cluster synchronization function took
something under a second elapsed time. they were forced to migrate to
an sna (vtam) based implementation which inflated the elapsed time to
over half a minute. recent reference to early days of the project
http://www.garlic.com/~lynn/2006p.html#39 "25th Anniversary of the Personal Computer"

another situation was that terminal emulation contributed to the early
heavy uptake of PCs in the business environment. you could get a PC
with dumb terminal emulation AND some local computing capability in a
single desktop footprint and for about the same price as the 327x
terminal that it would replace. later as PC programming became more
sophisticated, there were numerous efforts to significantly improve
the protocol paradigm between the desktop and the glasshouse. however,
all of these bypassed the communication group's sna infrastructure and
installed terminal controller product base.
http://www.garlic.com/~lynn/subnetwork.html#emulation

the limitations of terminal emulation later contributed heavily to
data from the glasshouse being copied out to local harddisks (either
on local servers or on the desktop itself). this continued leakage was
the basis of some significant infighting between the disk product
group and the communication product group. the disk product group had
come up with a number of products that went a long way toward correcting
the terminal emulation limitations ... but the communication product
group continually blocked their introduction (claiming that they had
strategic product responsibility for anything that crossed the
boundary between the glasshouse and the external world).

at one point (in the late 80s) a senior person from the disk product
group got a talk accepted at the communication product group's
worldwide, annual internal conference. he deviated from what was
listed for the talk by opening with the statement that the head of the
communication product group was going to be responsible for the demise
of the (mainframe) disk product group. somewhat unrelated topic drift,
misc. collected posts mentioning work with bldg 14 (disk engineering)
and bldg 15 (disk product test)
http://www.garlic.com/~lynn/subtopic.html#disk

however, this was also during the period that the communication
product group was attempting to stem the tide away from terminal
emulation with SAA (and we would take some amount of heat from the SAA
forces).

for other drift ... a side effort for hsdt in the mid-80s ... was
attempting to take some technology that had originally been developed
at one of the baby bells and ship it as an official product. this had
a lot of SNA emulation stuff at the boundaries talking to mainframes. SNA
had evolved something called cross-domain ... where a mainframe that
didn't directly control a specific terminal ... could still interact
with a terminal ("owned" by some other mainframe). the technology
would tell all the boundary mainframes that (all) the terminals were
owned by some other mainframe. in actuality, the internal infrastructure
implemented a highly redundant peer-to-peer infrastructure ... and
then just regressed to SNA emulation talking to the boundary mainframes.
http://www.garlic.com/~lynn/99.html#66 System/1 ?
http://www.garlic.com/~lynn/99.html#67 System/1 ?
http://www.garlic.com/~lynn/99.html#70 Series/1 as NCP (was: Re: System/1 ?)

Was FORTRAN buggy?

Brian Inglis <Brian.Inglis@SystematicSW.Invalid> writes:
IBM VM systems never had problems talking to each other, other IBM
systems, the ARPAnet, or Internet. AFAIK DEC systems only supported
RJE not NJE, and unlikely ever supported NJE over SNA. NJE was the
JES-JES (JES2/HASP, JES3/ASP) internode protocol for file transfer
which could cause system crashes if either end was improperly
configured.

it was worse than that ... NJE grew up out of HASP networking, some
amount of it having been done at TUCC. HASP had a one-byte index into
the table of 255 pseudo (spooled) devices with which it implemented
local spooling. the original networking support scavenged unused entries
from that table to define networking nodes. a typical HASP node might
have 60-80 pseudo devices defined ... leaving a maximum of 170-190
entries for defining networking nodes. hasp/jes also would trash any
traffic where either the originating node or the destination node
wasn't defined in the local table. the internal network fairly quickly
exceeded 255 nodes
http://www.garlic.com/~lynn/subnetwork.html#internalnet

limiting hasp/jes to being nothing more than a boundary node (it was
pretty useless as an intermediate node since it would trash some
percentage of the traffic flowing through). at some point, NJE increased
the maximum network size to 999 ... but that was after the internal
network was over 1000 nodes (again creating network operational problems
if JES was used for other than purely boundary nodes).

the other problem was that the NJE protocol confused the header fields
... intermingling networking stuff with purely local stuff. not only
would misconfigured hasp/jes systems crash other hasp/jes systems
... but it was possible for two (properly configured) systems at
slightly different release levels (with slightly different header
formats) to crash each other. there was an infamous scenario where a
system in san jose was causing systems in hursley to crash.

as a result, there was a body of technology that grew up in VM
networking nodes for simulating NJE. there was a whole library of NJE
drivers for the various versions and releases of hasp/jes. a VM simulated
NJE driver would be started for the specific boundary hasp/jes that it
was talking to.

incoming traffic from a boundary NJE node would be taken and
effectively translated into a generalized canonical format. outgoing
traffic to a boundary NJE node would have the header formatted for the
specific hasp/jes release/version. all of this was a countermeasure to
keep the wide variety of different hasp/jes systems around the world
from crashing each other.
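
a minimal sketch of that gateway structure (not the actual VM
networking code; the record fields, version ids and header layouts here
are invented for illustration) -- one parse/format pair per boundary
hasp/jes flavor, with everything carried internally in a single
canonical form:

    # canonical-format gateway sketch: normalize incoming NJE traffic,
    # re-format outgoing traffic for the specific boundary release.
    from dataclasses import dataclass

    @dataclass
    class CanonicalRecord:
        origin: str          # originating node
        destination: str     # destination node
        payload: bytes       # spool file / message contents

    def parse_v3(raw: bytes) -> CanonicalRecord:
        # hypothetical fixed 8+8 byte header used by one jes release
        return CanonicalRecord(raw[:8].decode().strip(),
                               raw[8:16].decode().strip(),
                               raw[16:])

    def format_v3(rec: CanonicalRecord) -> bytes:
        return (rec.origin.ljust(8).encode()
                + rec.destination.ljust(8).encode()
                + rec.payload)

    # one driver entry per hasp/jes release/version at the boundary
    DRIVERS = {"jes2-v3": (parse_v3, format_v3)}

    def inbound(version, raw):            # boundary -> canonical
        return DRIVERS[version][0](raw)

    def outbound(version, rec):           # canonical -> boundary
        return DRIVERS[version][1](rec)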

another characteristic was that the native VM drivers tended to have
much higher thruput and efficiency than the NJE protocol. however, at
some point (possibly for strategic corporate compatibility purposes)
they stopped shipping the native VM drivers ... and only shipped NJE
drivers for VM networking.

bitnet was a US educational network using the vm networking technology
(however, as mentioned, eventually only NJE drivers were shipped in
the vm product). while the internal network and bitnet used similar
technologies ... the sizes of the respective networks were totally
independent (bitnet's node count was separate from, and in addition
to, the internal network's).

at the time, the standard mainframe tcp/ip driver supported about
44kbytes/sec aggregate thruput while burning approx. a full 3090
processor. in some rfc 1044 tuning testing at cray research, i was
seeing 1mbyte/sec sustained thruput between a cray and a 4341-clone
... using only a modest amount of the 4341-clone processor (nearly two
orders of magnitude improvement in bytes per cpu second).

also, for the original NSFNET backbone RFP (effectively the operational
networking precursor to the modern internet), we weren't allowed to
bid. however, my wife went to the director of NSF and got a technical
audit of what we were running. one of the conclusions was effectively
that what we already had running was at least five years ahead of all
bid submissions (to build something new).

one of the rex historical references:
http://www.computinghistorymuseum.org/ieee/af_forum/read.cfm?forum=10&id=21&thread=7

from above:
By far the most important influence on the development of Rexx was the
availability of the IBM electronic network, called VNET. In 1979, more
than three hundred of IBM's mainframe computers, mostly running the
Virtual Machine/370 (VM) operating system, were linked by VNET. This
store-and-forward network allowed very rapid exchange of messages
(chat) and e-mail, and reliable distribution of software. It made it
possible to design, develop, and distribute Rexx and its first
implementation from one country (the UK) even though most of its users
were five to eight time zones distant, in the USA.

repeat from the above post ... in mid-1980, arpanet was hoping to have
100 nodes by 1983 (the year that the internal network hit the
1000th node mark):

ARPANET newsletter
ftp://ftp.rfc-editor.org/in-notes/museum/ARPANET_News.mail
from above:
NEWS-1 DCA Code 531
1 July 1980 (DCACODE535@ISI)
(202) 692-6175
ARPANET NEWSLETTER
---------------------------------------------------------------------
Over the past eleven years, the ARPANET has grown considerably and has
become the major U. S. Government research and development
communications network. The ARPANET liaisons have made significant
contributions to the network's success. Your efforts are voluntary,
but are critical to successful operation of each Host, IMP, and TIP.
Your continued support of the ARPANET is greatly appreciated and will
facilitate continued smooth ARPANET operation.
To aid you in performance of your duties, DCA will attempt to provide
you with the latest information in network improvements. This
information is grouped into two major areas: management and technical
improvements. However, a brief discussion of where we are going with
the ARPANET is in order.
The ARPANET is still a rapidly growing network. It provides a service
which is both cost and operationally effective. We predict the
ARPANET will grow to approximately 100 nodes by 1983, when we
will begin transferring some of the subscribers to DOD's AUTODIN II
network.

gordonb.6hiy2@burditt.org (Gordon Burditt) writes:
OS/360 used a linked list of "save areas" containing saved registers,
return addresses, and if desired, local variables. (Now, granted,
when I was working with it, C didn't exist yet, or at least it
wasn't available outside Bell Labs.) Reentrant functions (required
in C unless the compiler could prove it wasn't necessary) would
allocate a new save area with GETMAIN and free it with FREEMAIN.
Non-reentrant functions would allocate a single static save area.

minor note ... the savearea allocation was the responsibility of
the calling program ... but the saving of registers was the
responsibility of the called program ... i.e. on program entry, the
called program typically began by storing the caller's registers into
the save area addressed by register 13 (the standard convention was an
STM of registers 14 through 12 at offset 12 into that save area).

for a more detailed discussion ... i've done a q&d conversion of the old
ios3270 green card to html ... the discussion of call/save/return
conventions can be found at:
http://www.garlic.com/~lynn/gcard.html#50

the called program only needed a new save area if it would, in turn,
call some other program. non-reentrant programs (that called other
programs) could allocate a single static savearea. only when you had
reentrant programs that also called other programs ... was there an
issue regarding dynamic save area allocation.

the original cp67 kernel had a convention that was somewhat more like
a stack. it had a contiguous subpool of 100 save areas. all module
call/return linkages were via supervisor call. it was the
responsibility of the supervisor call routine to allocate/deallocate a
savearea for the call.

cp67 had been done at the cambridge science center on the 4th flr of
545 tech sq, including some people that had worked on ctss. multics
was on the 5th flr of 545 tech sq ... and also included some people
that had worked on ctss.

as i was doing various performance and scale-up work on cp67 ... i
made a number of changes to the cp67 calling conventions.

for some number of high-use non-reentrant routines (that didn't call
any other routines), i changed the calling sequence from supervisor
call to a simple "branch and link register" ... and then used a static
area for saving registers. for some number of high-use common library
routines ... the supervisor call linkage had a higher pathlength than
the function being called ... so the switch to the BALR call convention
for these routines significantly improved performance.

the other problem found with increasing load ... was that it became
more and more frequent that the system would exhaust the pool of 100
kernel save areas (which caused it to abort). i redid the logic so that it
could dynamically increase and decrease the pool of save areas
... significantly reducing system failures under heavy load.
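
the grow/shrink idea was roughly along these lines (a sketch only, in
python rather than 360 assembler; the pool sizes and thresholds are
illustrative, not the actual cp67 values):

    # save-area pool that extends itself under load instead of aborting,
    # and shrinks back when a large surplus builds up
    class SaveAreaPool:
        def __init__(self, initial=100, chunk=25, area_size=72):
            self.chunk, self.initial, self.area_size = chunk, initial, area_size
            self.free = [bytearray(area_size) for _ in range(initial)]

        def allocate(self):
            if not self.free:
                # pool exhausted: grow instead of failing the system
                self.free.extend(bytearray(self.area_size)
                                 for _ in range(self.chunk))
            return self.free.pop()

        def release(self, area):
            self.free.append(area)
            # release surplus areas once load drops off again
            if len(self.free) > self.initial + 2 * self.chunk:
                del self.free[-self.chunk:]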

Anne & Lynn Wheeler <lynn@garlic.com> writes:
part of the issue was that the official/strategic communication
product was SNA ... which was effectively a large master/slave
paradigm in support of a mainframe controlling tens of thousands of
(dumb) terminals (there were jokes about sna not being a system, not
being a network, and not being an architecture).

from above:
The original United network environment consisted of approximately
20,000 dumb terminals connected to three separate networks: an
SNA-based network connecting into IBM mainframes for business
applications; a Unisys-based network whose processors did all of the
operational types of programs for the airline such as crew and flight
schedules and aircraft weights and balance; and the Apollo network,
which connected users to the airline's reservation system for all
passenger information, seat assignments, etc. That meant that for
every airport that United flew into, it had to have three separate
telephone circuits--one for each network. According to Ken Cieszynski,
United's senior engineer in Networking Services, it was a very costly,
cumbersome and labor-intensive system for operating and maintaining a
business.

... snip ...

my wife was in conflict with the SNA group from early on ... having
co-authored the (competitive) AWP39 peer-to-peer networking architecture
during the early days of SNA, and did battle with them when she was
in POK responsible for loosely-coupled (cluster mainframe) architecture,
http://www.garlic.com/~lynn/submain.html#shareddata

and
http://www.computerworld.com/managementtopics/outsourcing/story/0,10801,63472,00.html

from above:
IBM helped build the transaction processing facility (TPF) for
American Airlines Inc. in the late 1950s and early 1960s that would
become the Sabre global distribution system (GDS). IBM built a similar
TPF system for Chicago-based United Air Lines Inc. That system later
became the Apollo GDS.

... snip ...

galileo/apollo history
http://www.galileo.com/galileo/en-gb/about/History/

KR Williams <krw@att.bizzzz> writes:
You were blind. We were "widebanding" tape images for ICs and
circuit cards by the time I got there in the mid '70s. They were
called RITs (Release Interface Tapes), though they never existed as
mag tapes. Lynn talked about VNET before that. By the early '80s
conferencing (similar function as the USENET) appeared. The IBMPC
conferencing disk opened in '81, IIRC.

chip designs were being shipped off to LSM (the los gatos state machine,
or logic simulation machine in publications; san jose bldg. 29) and EVE
(endicott validation engine; there was one in san jose bldg. 86, disk
engineering having been moved to an offsite location while bldg. 14 was
getting its seismic retrofit) for logic verification. there was a claim
that this helped contribute to bringing in the RIOS chipset (power) a
year early.

i got blamed for some of that early conferencing ... doing a lot of
the stuff semi-automated. there was even an article in datamation.
there were then some number of internal corporate task forces to
investigate the phenomenon. hiltz and turoff (network nation,
addison-wesley, 1978) were brought in as consultants for at least one
of the task force investigations. then a consultant was paid to sit in
the back of my office for nine months, taking notes on how i
communicated ... the consultant also had access to all my incoming and
outgoing email as well as logs of all my instant messaging activity.
besides an internal research report, it also (with some sanitizing)
turned into a stanford phd thesis (joint between language and computer
ai) ... some number of past posts mentioning computer mediated
conversation (and/or the stanford phd thesis on how i communicate)
http://www.garlic.com/~lynn/subnetwork.html#cmc

the ibmvm conferencing "disk" opened first ... followed by the ibmpc
conferencing "disk". the facility (TOOLSRUN) was somewhat a cross
between usenet and listserv (a recipient could specify a configuration
that worked either way). you could specify recipient options that
worked like listserv. however, you could also install a copy of
TOOLSRUN on your local machine ... and set up an environment that
operated more like usenet (with a local repository).

Bill Todd <billtodd@metrocast.net> writes:
But it is indeed a gray area as soon as one introduces the idea of a
CopyFile() operation (that clearly needs to include network copying
to be of general use). The recent introduction of 'bundles'
('files' that are actually more like directories in terms of
containing a hierarchical multitude of parts - considerably richer
IIRC than IBM's old 'partitioned data sets') as a means of handling
multi-'fork' and/or attribute-enriched files in a manner that simple
file systems can at least store (though applications then need to
understand that form of storage to handle it effectively) may be
applicable here.

we had somewhat stumbled across file bundles (based on use, not
necessarily any filesystem structure organization) in the work that
started out doing traces of all record accesses for i/o cache
simulation (circa 1980).

the strict cache simulation work showed that partitioned caches (aka
"local LRU") always performed worse than a global cache (aka global
LRU). for a fixed amount of electronic storage, a single global system
i/o cache always had better thruput than partitioning the same amount
of electronic storage between i/o channels, disk controllers, and/or
individual disks (modulo a track cache for rotational delay
compensation).
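
a toy version of that comparison (not the original simulator): replay
the same reference trace through one global LRU cache versus the same
total capacity split into fixed per-device partitions, and count
misses. with any skewed trace, the partitioned variant loses because
idle partitions can't lend capacity to the busy ones:

    # global vs partitioned ("local") LRU for the same total capacity
    from collections import OrderedDict

    def lru_misses(trace, capacity):
        cache, misses = OrderedDict(), 0
        for ref in trace:                     # ref = (device, record)
            if ref in cache:
                cache.move_to_end(ref)
            else:
                misses += 1
                cache[ref] = None
                if len(cache) > capacity:
                    cache.popitem(last=False) # evict least recently used
        return misses

    def global_lru(trace, total):
        return lru_misses(trace, total)

    def partitioned_lru(trace, total, ndevices):
        per_device = total // ndevices
        return sum(lru_misses([r for r in trace if r[0] == dev], per_device)
                   for dev in range(ndevices))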

further work on the full record access traces started to show up some
amount of repeated patterns that tended to access the same collection
of files. for this collection of data access patterns, rather than
disk arm motion with various kinds of distribution ... there was very
strong bursty locality. this led down the path of maintaining more
detailed information about files and their usage for optimizing
thruput (and layout).

we had done detailed page reference traces and cluster analysis in
support of semi-automated program reorganization ... which was
eventually released as the VS/REPACK product. the disk record i/o traces
started down the path of doing something similar for filesystem
organization/optimization.

i had done a backup/archive system that was used internally at a
number of locations. this eventually morphed into a product called
workstation datasave facility and then adsm. it was later renamed tsm
(tivoli storage manager). this now supports bundles/containers for
file storage management (i.e. collections of files that tend to have
bursty locality of reference patterns)
http://www.garlic.com/~lynn/submain.html#backup

some number of other backup/archive and/or (hierarchical) storage
management systems now also have similar constructs.

Was FORTRAN buggy?

vjp2.at writes:
Maybe the problem was the pretentious creeps got drawn to being
"safe" (F.U.D.) on IBM while the innovators were all on DEC.

i've commented before that there were more 4341s sold than vax'es into
the same mid-range market segment (besides the large numbers deployed
internally). there were numerous cases of customer orders in the
hundreds at a time for departmental computing type operations. post
giving an example:
http://www.garlic.com/~lynn/2001m.html#15 departmental servers

the mid-range then got hit in the mid-80s as that market segment
started moving to workstations and larger PCs for servers and
departmental computing.

to some extent the popular press seemed to focus on the high-end
mainframe iron doing commercial batch operations compared to some of
the other vendors' offerings in the mid-range market segment (even tho
boxes like the 4341 and 4331 were also extremely popular in that
midrange market in the late 70s and early 80s).

hancock4 writes:
IBM announced the first disk drive 50 years ago. Modern computing
would not exist without the economical random access memory afforded by
the disk drive. Could you imagine loading a separate cassette tape
every time you wanted to run a program or access a file? All on-line
processing wouldn't exist since there'd be no way to locate and store
information in real time.

Apparently this anniversary is a yawner. The 40th Anniv of S/360 got
attention.

note that the san jose plant site, where all of this was done ... now
belongs to hitachi. there used to be all sorts of stuff on various ibm
san jose web sites about early activity ... but a lot of that seemed
to go missing when the location changed hands.

can you imagine holding big festivities on a plant site that no
longer belongs to you?

during the early 80s there was some amount of friendly competition
between the san jose storage business and the pok large mainframe
business over which location was contributing the most to the bottom
line (which had traditionally been pok, but there was a period where
they were neck & neck ... and even quarters where san jose passed pok).

Anne & Lynn Wheeler <lynn@garlic.com> writes:
note that the san jose plant site, where all of this was done ... now
belongs to hitachi. there used to be all sorts of stuff on various ibm
san jose web sites about early activity ... but a lot of that seemed
to go missing when the location changed hands.

can you imagine holding big festivities on a plant site that no
longer belongs to you?

you might find the marketing department of a line of business
possibly taking a small part of their budget ... say several million to
drop on a gala and press releases ... but since the original line of
business has been sold off to somebody else ... it is hard to imagine
who is likely to drop even a couple million on such an activity.

how many remember the "last great dataprocessing IT party" (article in
usatoday)? ... ibm had taken the san jose coliseum ... brought in
jefferson starship and all sorts of other stuff (a gala for the rsa
show). between the time of the contracting/funding for the event and
the actual event ... the responsible executive got totally different
responsibilities ... but they allowed him to play the greeter (all
dressed up in a tux) at the front door as you went in.

1. In 1969, Continental Airlines was the first (insisted on being the
first) customer to install PARS. Rushed things a bit, or so I hear. On
February 29, 1972, ALL of the PARS systems cancelled certain
reservations automatically, but unintentionally. There were (and still
are) creatures called "coverage programmers" who deal with such
situations.

2. A bit of "cute" code I saw once operated on a year by loading a
byte of packed data into a register (using INSERT CHAR), then used LA
R,1(R) to bump the year. Got into a bit of trouble when the year 196A
followed 1969. I guess the problem is not everyone is aware of the odd
math in calendars. People even set up new religions when they discover
new calendars (sometimes).

3. We have an interesting calendar problem in Houston. The Shuttle
Orbiter carries a box called an MTU (Master Timing Unit). The MTU gives
yyyyddd for the date. That's ok, but it runs out to ddd=400 before it
rolls over. Mainly to keep the ongoing orbit calculations smooth. Our
simulator (hardware part) handles a date out to ddd=999. Our simulator
(software part) handles a date out to ddd=399. What we need to do, I
guess, is not ever have any 5-week long missions that start on New
Year's Eve. I wrote a requirements change once to try to straighten
this out, but chickened out when I started getting odd looks and
snickers (and enormous cost estimates).

Pfizer to Use RFID to Combat Fake Viagra
http://www.technewsworld.com/story/53218.html

from above ...
Pfizer claims it is the first pharmaceutical company with a program of
this type, focused on EPC authentication as a means of deterring
counterfeiting. However, Wal-Mart now requires its top 300 suppliers
to tag cases and pallets of select goods, and over 24 drug providers
tag bulk containers of Schedule II drugs, prescription painkillers and
drugs of abuse.

et472@FreeNet.Carleton.CA (Michael Black) writes:
Or, the hard drive would be invented, but later. I'm less certain
that would have impacted things that much. I got by without a hard
drive until the end of 1993, so while a hard drive likely made things
easier before that, they could be lived without.

you are thinking about your personal use ... but it wasn't originally
invented for personal use ... but for large dataprocessing commercial
operations. all the real-time, online transaction stuff starting in
the 60s was built on hard drives: electronic point-of-sale credit
cards, atm machines, online airline reservation systems, etc. ... the
lack of hard drives would have had an enormous impact on the large
number of online/realtime things that people were starting to take
for granted.

scott@slp53.sl.home (Scott Lurndal) writes:
Most of the San Jose plant buildings have been torn down or are being
torn down as we speak to make room for 3000 homes and various shopping
and big-box stores. Replacing office and manufacturing space with
homes is pretty damn stupid when the office/manufacturing space is
in a counter-commute area. Those 3000 homes are really going to help
traffic suck on 85, 87 and 101.

also as per the earlier posts, bldg. 50 was part of the massive
manufacturing facility build-out done in the mid to late 80s ... part
of armonk's prediction that world-wide business was going to double
(from $60b/annum to $120b/annum). also as mentioned in the previous
posts, it probably was a career limiting move to take the opposite
position from corporate hdqtrs (that at least the hardware business
wasn't going to be doubling).

the comment was specifically about the san jose "plant site" ... the
disk division where they actually had a manufacturing line ... recent
reference to the plant site's "new" manufacturing bldg. 50 ... also to
a site with photos of the plant site from the air
http://www.garlic.com/~lynn/2006r.html#20 50th Anniversary of invention of disk drives

Was FORTRAN buggy?

KR Williams <krw@att.bizzzz> writes:
The only major projects I saw canceled in the '70s were *LOSERS*
(e.g. FS) and were replaced by products that made gazillion$ (303x
and 308x). Maybe IBM was behind DEC in its slide down the MBA
slope. IBM certainly got there, but in the late '80s, not '70s.

however, the person credited with leading the 3033 thru to its success
(3031 and 3032 were primarily repackaged 158s & 168s using the channel
director ... and even the 3033 started out as the 168 wiring diagram
remapped to newer chips) ... was then brought in as a replacement
to head up the disk division.

part of all this was that significant resources and time were diverted
into FS ... and after it was killed, there was a lot of making up
for lost time

we sort of got our hands slapped in the middle of pulling off 3033
success.

and after that was killed ... there was a 16-way smp project started
called "logical machines" ... that had 16 370 (158) engines all ganged
together with extremely limited memory/cache consistency. we had
diverted the attention of some of the processor engineers that were
dedicated to 3033 ... to spending a little time on the "logical machine"
effort. when the person driving 3033 eventually found out that we were
meddling with some of his people ... there was some amount of attitude
readjustment (and a suggestion that maybe certain people shouldn't be
seen in pok for awhile). during 3033, there were stories about him
being in the admin office running pok during first shift and being down
on the line with the engineers on second shift

hancock4 writes:
By the standards of the 1950s and 1960s, main memory was measured in
thousands while disk space was measured in millions. The first disk
drive had [only] 5 meg but that was enormous compared to main memory of
those days, maybe 80K. In a few years they got the disk up to 50 Meg.
I don't think the drums of that era could get anywhere near that.

360 had the fixed head 2303 & 2301 drums (the 2301 was effectively a
2303 but read/wrote four heads in parallel) with 4mbytes capacity, in
the era of 2311 (7mbytes) and 2314 (29mbytes) disks.

in the early 70s with 370 came the 3330-1 (100 mbytes) and then the
3330-11 (200 mbytes), and the fixed-head 2305 disk (12mbytes) was the
replacement for the 2301/2303 drums.

when cp67 originally showed up at the univ., its disk i/o strategy was
strictly FIFO and paging operations were done with a separate i/o
operation per 4k page transfer.

one of the performance changes i did as an undergraduate at the
univ. was to put in ordered arm seek queueing ... and where possible
(try and optimally) chain all queued page transfers into a single
i/o (for the same device on drums and for the same cylinder on disk).

the ordered arm seek queueing allowed at least 50 percent better
thruput under nominal conditions and the system degraded much more
gracefully under heavy load.

with a single page transfer per physical i/o, a 2301 drum would peak
around 80 page transfers per second (avg. rotational delay for each
page). with chaining, a 2301 would peak around 300 page transfers per
second.
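
very roughly, the two changes look like this (a sketch; the request
representation is invented, and the real code was of course in the
cp67 kernel, not python):

    # (1) order queued requests elevator-style instead of FIFO,
    # (2) fold all queued transfers for the same cylinder into one i/o
    from collections import defaultdict, namedtuple

    Request = namedtuple("Request", "cyl page")

    def order_requests(queue, current_cyl, moving_up=True):
        # service requests ahead of the arm in its current direction
        # first, then sweep back the other way
        ahead = sorted((r for r in queue if (r.cyl >= current_cyl) == moving_up),
                       key=lambda r: r.cyl, reverse=not moving_up)
        behind = sorted((r for r in queue if (r.cyl >= current_cyl) != moving_up),
                        key=lambda r: r.cyl, reverse=moving_up)
        return ahead + behind

    def chain_by_cylinder(ordered):
        # one chained channel program per cylinder instead of a separate
        # i/o (and a separate rotational delay) per 4k page
        chains = defaultdict(list)
        for r in ordered:
            chains[r.cyl].append(r)
        return list(chains.values())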

later i did a page-mapped interface for the cms filesystem in which i
could do all sorts of fancy i/o optimizations (that were a lot more
difficult and/or not possible using the standard i/o interface
paradigm). post from earlier this year about some old performance stuff
with the page-mapped interface
http://www.garlic.com/~lynn/2006.html#25 DCSS as SWAP disk for z/Linux

jsavard writes:
As it happens, the technique of "Just-in-Time compilation", recently
discovered, *is* a highly efficient way of emulating other
architectures entirely in software. And some Itanium chips were
claimed to execute x86 code with what was essentially an independent
chip of 486-style design on the die. I'm surprised at that: given
that the Itanium shares data types with the x86, it should have been
possible to have an Itanium control unit and an x86 control unit
share the same ALUs for more equal performance.

there have been various looks at doing 360/370 simulation on itanium
(porting existing i86 simulators to itanium) going back to the
earliest days of itanium design.

in the late 70s, early 80s ... there was fort knox. the low-end 360 &
370 processors were typically implemented with "vertical" microcoded
processors ... that averaged out to something like 10 micro-instructions
per 360/370 instruction. the higher end 360/370 used horizontal
microcode engines (being somewhat more similar to itanium).

fort knox was to replace the vast array of microprocessor engines with
801s. this started out with the follow-on to the 4341 going to be an
801/risc engine. this was eventually killed ... i contributed to one
of the analyses that helped kill it. part of the issue was that silicon
technology was getting to the point that you could start doing 370
almost completely in silicon.
http://www.garlic.com/~lynn/subtopic.html#801

one of the other efforts was 801/romp, which was going to be used in the
opd displaywriter follow-on. when that was killed, it was retargeted as
a unix workstation and became the pc/rt. this then spawned 801/rios
(power) and then somerset and power/pc.

there was also some work in fort knox on a hybrid 370 simulation
effort using 801 ... that involved some JIT activity. i got dragged
into a little of it because i had written a PLI program in the early
70s that processed 360/370 assembler listings ... analyzed what was
going on in the program and tried to generate a higher level
representation of the program ... a couple recent postings
http://www.garlic.com/~lynn/2006p.html#1 Greatest Software Ever Written?
http://www.garlic.com/~lynn/2006p.html#4 Greatest Software Ever Written?

Computer Artifacts

Steve O'Hara-Smith <steveo@eircom.net> writes:
There were various optical disc arrangements in use. Around 1990 I
saw a large read only store based on a jukebox like affair with (IIRC) 12in
optical WORM discs holding 2GB a piece. Eventually the CD standardised the
physical format of course.

sometime in '76, in san jose there was a lathe-like arrangement with
something like 200 spinning floppies. the spinning provided some
strength/structure to the floppies but there was also a problem with
the floppy material stretching with the constant spinning. a single
head was on an assembly parallel to the rotating "axle" ... it would
position itself at the floppy it wanted to read/write, a small blade
parted the floppies and then compressed air further parted the floppies
providing enuf room for the head to be inserted. the spinning provided
enuf structure for the head/floppy contact for read/write. a single
head for all two hundred floppy "platters" is somewhat analogous to
early disk assemblies. i remember john cocke referring to it as
something like a "tail dragger" (as a contrast to all the bleeding edge
stuff that was going on).

The Anton design is a step further than the 4341MG2 implementation.
For a significant number of functions the Anton raises the circuitry
interface almost to the architected interface.

Note the circuitry interfaces across the 4300s are not identical. The
4331s are significantly different from the 4341s. Differences do
exist even within model groups. Some of this increased circuitry
expands existing components (a larger cache) and some of it functional
alters the circuitry to microcode interface.

On the 4300s, in fact on all S/370 compatible processors,
compatibility and portability are accomplished at the artificial,
architected machine interface (aka 370 architecture interface)

... snip ...

for other topic drift, originally the 3090 was going to use an embedded
4331 as the service processor, running a highly modified version
of vm370 release 6 with all the panels/menus done in ios3270. the 3090
was eventually shipped with a pair of embedded 4361s as dedicated
service processors (for redundancy and availability).

the static data in the chip represents supposedly unique information
used as something you have authentication. copying/cloning the
information was sufficient to enable fraudulent transactions.

however, in the passport case, the "static data" in the chip
represents effectively biometric information (picture) about the
individual, requiring a further step of matching the data against the
person for something you are authentication. any copying/cloning of
the information doesn't directly enable fraudulent transactions (as in
the yes card scenario involving static data something you have
authentication). however, as mentioned in the referenced post, there
is other personal information which raises privacy issues.

for rfid/contactless, there is possibly increased ease of
copying/cloning the information compared to some other technologies
(analogous to how using the internet can increase exposure of
information). however, there can be radically different threat models
associated with the information that is exposed.

Intel abandons USEnet news

"comp.arch@patten-glew.net" <AndyGlew@gmail.com> writes:
I started hearing about this 4 years ago from somebody at Microsoft.
It appears that some big brokerage company lost a lawsuit, because
customer lists which were stored on an incorrectly configured laptop
computer owned by an employe, were lost. Hypothesis is/was that, if
the computer lost had been company owned and configured, e.g. with hard
disk encryption, the company would have been deemed less negligent. My
Microsoft acquaintance at that time predicted the demise of "dial in
from your own personal computer" telecommuting.

and the swimming pool attractive nuisance scenario. there was civil
litigation claiming several billion around 30 years ago involving
industrial espionage and theft of trade secrets. the judge made
statements effectively that countermeasures & protection
have to be proportional to value (otherwise you can't really blame
people for doing what comes naturally and stealing).

hancock4 writes:
I don't think they were very popular. Drums were a "compromise"
between the high capacity of disk and the high speed of core. Since
drums had fixed heads over each track they were faster, indeed, the IBM
650 and other small machines of the 1950s used drums as the sole main
memory. I believe drums were invented by ERA (originally a secret
firm* but then part of Rem Rand Univac**) in the late 1940s and quite
popular in that era.

there was another "compromise/trade-off" between disks and high speed
core for 360s. disks (drums, datacells, etc) were referred to as
"DASD" (direct access storage device) ... more specifically "CKD" DASD
(count-key-data).

the trade-off was extremely scarce real storage vis-a-vis relatively
abundant i/o resources. typically, filesystems have an index of where
things are on the disk. most systems these days use the relatively
abundant real storage to cache these indexes (in addition to caching
the data itself). however, on 360, the indexes were kept on disk
(saving real storage).

CKD essentially allowed filesystem metadata to be written along with
the data itself. the indexes were kept on disk as filesystem metadata.
rather than reading the indexes into real storage (and possibly caching
them), CKD DASD i/o programming provided for doing a sequential search
of the indexes on disk ... trading off scarce real storage for abundant
i/o capacity.

however, by at least the mid-70s, the trade-off was reversing ... with
real storage starting to become abundant and disk i/o was becoming
more and more of a system bottleneck.

in the late 70s, i was brought in to investigate a severe
throughput/performance problem for a large national retail chain. they
had a central dataprocessing facility providing support for all stores
nationally ... with several clustered mainframes sharing a common
application library. it turned out that the CKD/PDS program library
dasd/disk search was taking approx. 1/2 second elapsed time (the actual
program load took maybe 10-20 milliseconds ..., but the on-disk index
serial search was taking 500 milliseconds) and all retail store
software application program loads were serialized through this
process (i.e. the whole complex could do at most about two application
program loads per second, regardless of how many processors were
attached).

this trade-off left over from the mid-60s included keeping the argument
for the on-disk serial search in processor real storage (further
optimizing the real storage constraint) ... however it required that
there was a dedicated, exclusive i/o path between the device and the
processor real storage for the duration of the search. this further
exacerbated the throughput problem. typically multiple disks (between 8
and 32) might share a common disk controller and i/o channel/bus. not
only was the disk performing the search busy for the duration ... but
because the dedicated open channel between the disk and processor
storage (for accessing the search argument) was also busy for the
duration of the search ... it wasn't possible to perform any operations
for any of the other disks (sharing the same controller and/or i/o
channel/bus).

misc. collected posts (the subtopic.html#disk collection referenced
earlier) primarily cover working with the people in bldg. 14 (disk
engineering) and bldg. 15 (disk product test) on the san jose plant
site.

in any case, this and other factors prompted my observation that over
a period of ten to fifteen years, disk relative system performance had
declined by an order of magnitude ... i.e. other system resources had
increased by a factor of fifty while disk resources (in terms of
operations per second) increased by possibly only a factor of five.

the initial reaction was that the disk division assigned their disk
performance and modeling group to refute my statements ... however,
after several weeks they came back and said that I may have actually
slightly understated the issue.

the change in the relative thruput of different system components
... especially with respect to each other ... results in having to
change various strategies and trade-offs ... which is also somewhat the
recent thread from comp.arch
http://www.garlic.com/~lynn/2006r.html#3 Trying to design low level hard disk manipulation program
http://www.garlic.com/~lynn/2006r.html#12 Trying to design low level hard disk manipulation program

and RDBMS. in the 70s, there was something of a pro/con argument
between the people in the santa teresa lab (bldg 90) dealing with the
60s "physical" databases and the system/r work going on in bldg. 28.
the stl people were claiming that system/r indexes doubled the typical
physical disk space requirements and significantly increased the
search time to find a specific record (potentially requiring reading
multiple different indexes). this was compared to the 60s physical
databases where physical record pointers were exposed as part of the
data paradigm.

the counter argument was that there was significant manual and
administrative effort required to manage the exposed physical record
pointers ... which was eliminated in the RDBMS paradigm.

what you saw going into the 80s was a significant increase in disk
space (the number of bits per disk arm increased by an order of
magnitude, while disk arm accesses/sec only showed slight improvement)
and a significant decrease in the price per megabyte of disk space
... which somewhat made the issue of the size of the RDBMS indexes
moot. furthermore, the ever increasing abundance of real storage made
it possible to cache a significant portion of the RDBMS index in real
storage (eliminating the significant number of additional I/Os to
process the index ... vis-a-vis the physical databases from the 60s).

"John Mashey" <old_systems_guy@yahoo.com> writes:
I say again: the hardware cost of this was really minimal, or we
wouldn't have done it. If an OS used it, there would be more
transitions. Nobody wanted the supervisor state code to access the
low-level resources.

of course os/vs2 started down the path when it went from svs (single
virtual storage) to mvs (multiple virtual storage).

the problem was that the whole infrastructure used a pointer passing
paradigm ... everything required that you access the caller's storage.

the move to mvs ... gave each application its own virtual address
space ... but with the MVS kernel appearing in 8mbytes of each one of
these application address spaces ... which allowed kernel code to
access the application parameters pointed to by a pointer-passing
invocation. this was nominally an 8mbyte/8mbyte split between kernel
and application out of the 16mbyte virtual address space.

however, this created a big problem for subsystem applications that
were also now in their own unique virtual address spaces. it became a
lot harder for a subsystem application to be invoked from a "standard"
application (running in its own unique address space) via a pointer
passing call ... and still reach over and obtain the relevant
parameter information.

dual-address space mode was born with the 3033 ... where a
semi-privileged subsystem application could be given specific access to
a calling application's virtual address space. part of what prompted
dual-address space in the 3033 ... was that the workaround for
subsystems accessing parameters had been the establishment of something
called the "common segment" ... basically each subsystem got a reserved
space in every address space for placing calling parameters that could
then be accessed via the passed pointer. larger installations providing
a number of services had a five megabyte common segment (out of every
16mbyte virtual address space, in addition to the 8mbyte kernel)
... leaving only 3mbytes for application use.

there was still a performance problem (even with dual-address space)
that the transition from standard application to subsystem application
required an indirect transition through the kernel via a kernel
call. this became more and more an issue as more system library
functions were moved out of standard application space and into their
own virtual address space.

dual-address space was expanded with access registers and program
call/return instructions. basically something close to the performance
of a library branch-and-link ... but with control over semi-privileged
state changes as well as switching virtual address spaces ... while
still providing access back to the caller's virtual address space.

... from above ...
The authorization mechanisms which are described in this section
permit the control program to establish the degree of function
which is provided to a particular semiprivileged program. (A
summary of the authorization mechanisms is given in Figure 5-5 in
topic 5.4.8.) The authorization mechanisms are intended for use by
programs considered to be semiprivileged, that is, programs which
are executed in the problem state but which may be authorized to
use additional capabilities. With these authorization controls, a
hierarchy of programs may be established, with programs at a higher
level having a greater degree of privilege or authority than
programs at a lower level. The range of functions available at
each level, and the ability to transfer control from a lower to a
higher level, are specified in tables which are managed by the
control program. When the linkage stack is used, a nonhierarchical
transfer of control also can be specified.

50th Anniversary of invention of disk drives

Anne & Lynn Wheeler <lynn@garlic.com> writes:
this trade-off left over from the mid-60s included keeping the argument
for the on-disk serial search in processor real storage (further
optimizing the real storage constraint) ... however it required that
there was a dedicated, exclusive i/o path between the device and the
processor real storage for the duration of the search. this further
exacerbated the throughput problem. typically multiple disks (between 8
and 32) might share a common disk controller and i/o channel/bus. not
only was the disk performing the search busy for the duration ... but
because the dedicated open channel between the disk and processor
storage (for accessing the search argument) was also busy for the
duration of the search ... it wasn't possible to perform any operations
for any of the other disks (sharing the same controller and/or i/o
channel/bus).

the characteristic of CKD DASD search i/o operations constantly
re-referencing the search information in processor memory was taken
advantage of by ISAM indexed files. ISAM could have multiple levels of
indexes out on disk ... and an ISAM channel i/o program could get
extremely complex. the channel i/o program could start off with an
initial metadata search argument ... which would search for the
argument based on various criteria (less, greater, equal, etc), then
chain to a read operation of the associated data (which could be the
next level metadata search argument) ... and then chain to a new
search operation using the data just read as its search argument. all
of this could be going on totally asynchronously to any processor
execution.
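
conceptually the effect is something like the following sketch, where
the whole multi-level index walk happens "in the channel" with no cpu
instructions executed in between (this models the chaining only; it is
not actual CCW coding and the index layout is invented):

    # chained search/read walk down multi-level on-disk indexes
    def run_channel_program(key, index_levels, records):
        block = 0                           # start at the top index block
        for level in index_levels:          # each level lives out on disk
            # SEARCH: the device scans entries against the search argument
            # (re-fetched from processor memory on every rotation)
            entry = next(e for e in level[block] if e["key"] >= key)
            # chained READ: the pointer just read locates the next level
            block = entry["pointer"]
        return records[block]               # final chained READ of the data

    # tiny example: two index levels over four records
    top    = {0: [{"key": "m", "pointer": 0}, {"key": "z", "pointer": 1}]}
    bottom = {0: [{"key": "b", "pointer": 0}, {"key": "m", "pointer": 1}],
              1: [{"key": "r", "pointer": 2}, {"key": "z", "pointer": 3}]}
    records = ["apple", "melon", "raspberry", "zucchini"]
    assert run_channel_program("r", [top, bottom], records) == "raspberry"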

Tom.Schmidt@ibm-main.lst (Tom Schmidt) writes:
Have the presenter review ancient history in the S/360 line -- the 360 line
generally supported differing page sizes (2K and 4K) and the 360/67
supported 2K, 4K and even 1M page sizes. (I don't recall whether any SCP
shipped that dealt with 1M page sizes, especially in the VERY expensive
storage era of the S360 line though. That could be why the idea lurked for
lo these many years.)

360/67 was the only 360 that supported virtual memory (other than a
custom 360/40 with special hardware modifications that cambridge did
before it got a 360/67). 360/67 supported only 4k page sizes and
1mbyte segments ... however 360/67 supported both 24-bit and 32-bit
virtual addressing

max. storage on 360/67 uniprocessor was 1mbyte real storage (and a lot
of 360/67 were installed with 512k or 768k real storage). out of that
you had to take fixed storage belonging to the kernel ... so there
would never be any 1mbyte real storage left over for virtual paging.

note that the 360/67 multiprocessor also had a channel director
... which had all sorts of capability ... including allowing all
processors in a multiprocessor environment to address all i/o channels
... but
could still be partitioned into independently operating uniprocessors,
each with their dedicated channels. standard 360 multiprocessor only
allowed sharing of memory ... but a processor could only address their
own dedicated i/o channels. the settings of the channel director could
be "sensed" by settings in specific control registers (again see
360/67 functional characteristics).

equivalent capability allowing all processors to address all channels
(in multiprocessor environment) and supporting more than 24bit
addressing didn't show up again until 3081 and XA.

370 virtual memory had 2k and 4k page size option as well as 64k and
1mbyte segments.
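
a small sketch (python) of how a 24-bit virtual address would decompose
under the four segment-size/page-size combinations (illustrative
arithmetic only):

  def split(addr, seg_size, page_size):
      byte = addr % page_size
      page = (addr // page_size) % (seg_size // page_size)
      seg  = addr // seg_size
      return seg, page, byte

  addr = 0x123456
  for seg_size, page_size in [(64*1024, 2048), (64*1024, 4096),
                              (1024*1024, 2048), (1024*1024, 4096)]:
      seg, page, byte = split(addr, seg_size, page_size)
      print("seg=%4dk page=%dk -> segment %d, page %d, byte offset %d"
            % (seg_size//1024, page_size//1024, seg, page, byte))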

however, when vm370 was supporting guest operating systems with
virtual memory ... the vm370 "shadow tables" had to use whatever page
size the guest operating system was using (exactly mirroring the
guest's tables). dos/vs and vs1 used 2k paging ... os/vs2 (svs & mvs)
used 4k paging.

there was an interesting problem at some customers with the doubling
of cache size going from 370/168-1 to 370/168-3. doubling the cache
size meant one more bit from the address was needed to index cache
line entries, and the "2k" bit was taken ... on the assumption that
the machine would nominally be used for os/vs2. however, there were
some number of customers running vs1 under vm on 168s. these customers
saw degradation in performance when they upgraded from 168-1 to 168-3
with twice the cache size.

the problem was that the 168-3 ... every time there was a switch
between 2k page mode and 4k page mode ... would completely flush the
cache ... and when in 2k page mode it would only use half the cache
(same as 168-1) ... and use all the cache in 4k page mode. using only
half the cache should have shown the same performance on a 168-3 as on
a 168-1. however, the constant flushing of the cache, whenever vm moved
back & forth between (vs1's shadow table) 2k page mode and (standard
vm) 4k page mode ... resulted in worse performance with a 168-3 than a
straight 168-1.
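
a back-of-envelope sketch (python) of why the flushing swamps the
benefit of the larger cache; all the numbers (switch rate, refill cost,
working set) are invented for illustration:

  mode_switches_per_sec = 500      # vm <-> vs1-shadow-table switches (assumed)
  lines_rereferenced    = 1024     # cache lines actually re-used after each flush (assumed)
  refill_cost_sec       = 0.5e-6   # time to refill one line from storage (assumed)

  flush_penalty = mode_switches_per_sec * lines_rereferenced * refill_cost_sec
  print("time lost refilling flushed lines: %.0f%% of every second"
        % (flush_penalty * 100))
  # a 168-1 (half the cache, but no flushing) pays none of this penalty,
  # so the 168-3 can come out behind despite twice the cache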

Tom.Schmidt@ibm-main.lst (Tom Schmidt) writes:
For zSeries to do it you would either be looking at creative use of MIDAW
to read/write the 1M pages from/to existing DASD (with less-than-ideal
performance) or you would be looking at new DASD (or son-of-DASD maybe).
Perhaps it would be a good excuse to resurrect expanded storage (ESTORE)
with an also-resurrected Asynchronous Page Mover (of 1M)?

in the early 80s ... "big pages" were implemented for both VM and MVS.
this didn't change the virtual page size ... but changed the unit of
moving pages between memory and 3380s ... i.e. "big pages" were 10 4k
pages (3380) that moved to disk and were fetched back in from disk. a
page fault for any 4k page in a "big page" ... would result in the
whole "big page" being fetched from disk.

note the original expanded store ... wasn't so much an architecture
issue, it was a packaging/technology issue. 3090s needed more
electronic store than could be packaged within the prescribed latency
of cache/memory fetch. the approach was to place the storage that
couldn't be packaged for close access ... on a different bus under
software control that burst transfers in (4k) page size units
... rather than the smaller cache line size units ... and then
leverage the programming paradigm already in place for paging to/from
disk.

this is somewhat like LCS from 360 days (8mbytes of 8mic storage
... compared to 750ns storage on 360/67 or 2mic storage on 360/50).
the simple strategy was to just consider it an adjunct of normal,
faster storage and tolerate the longer fetch cycle. however, some
installations tried to carefully allocate stuff in LCS ... lower-use
programs and/or purely cached data (like hasp buffers). some
installations actually went further and copied programs out of LCS to
faster storage before execution.

Tom.Schmidt@ibm-main.lst (Tom Schmidt) writes:
When I lived in POK I was told that a part of the reason for extended
storage also had to do with its lack of requirement for storage protect key
arrays. The expanded storage memory was then allowed to be more like the
memory of the competitors in terms of cost, structure & simplicity. (By
the early 1990s the competition was considered to be HP, not Amdahl nor
HDS).

say 6bits of storage key per 4k bytes is lost in the noise? (2k
storage keys as well as 2k virtual pages having been dropped around
3081 xa time-frame) ... if you wanted to worry about something
... there was 16bit ecc for every 64bit double word (or 2bits per 8bit
byte ... as opposed to parity bit per 8bit byte) ... optimizations
were trying to get failure coverage (better than simple 1bit/byte
parity) with less than 80bits (for 64bit of data) ... like 78bits,
72bits, etc ...
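
the arithmetic being compared, as a quick sketch (python):

  key_bits_per_4k = 6
  print("storage key overhead: %.3f%%"
        % (100.0 * key_bits_per_4k / (4096 * 8)))

  for total in (80, 78, 72):            # check+data widths mentioned above
      check = total - 64
      print("%d-bit word (%d check bits): %.1f%% overhead"
            % (total, check, 100.0 * check / 64))

  # simple parity is 1 bit per 8-bit byte = 8 bits per doubleword = 12.5%,
  # but only detects (doesn't correct) single-bit errors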

the claim was that 3090 expanded store memory chips were effectively
the same as regular memory chips ... because ibm had really good
memory yield. however, there was a vendor around 1980 that had some
problems with its memory chip yield involving various kinds of
failures that made the chips unusable for normal processor
fetch/store (memory).

for other drift ... there was a lot of modeling for 3090 balanced
speeds&feeds ... part of it was having sufficient electronic memory to
keep the processor busy (which then led to the expanded store stuff)

part of the issue was using electronic memory to compensate for disk
thruput. starting in the late 70s, i was making statements that disk
relative system thruput had declined by an order of magnitude over a
period of years. the disk division assigned the performance and
modeling group to refute the statement. after a period of several
weeks, they came back and mentioned that i had actually slightly
understated the problem ... the analysis was then turned around into a
SHARE presentation on optimizing disk thruput (i.e. leveraging
strengths and compensating for weaknesses). misc. postings
referencing that share presentation
http://www.garlic.com/~lynn/2001l.html#46 MVS History (all parts)
http://www.garlic.com/~lynn/2006f.html#3 using 3390 mod-9s

one of the issues that cropped up (somewhat unexpectedly?) was the
significant increase in 3880 (disk controller) channel busy. the 3090
channel configuration had somewhat been modeled assuming 3830 control
unit channel busy. the 3830 had a high performance horizontal
microcode engine. for the 3880, they went to separate processing for
the data path (enabling support for 3mbyte/sec and then 4.5mbyte/sec
transfers), but a much slower vertical microprogrammed engine for
control commands. this slower processor significantly increased
channel busy when processing channel controls/commands (compared to
the 3830).

a recent post discussing some of the problems that cropped up during
3880 development (these showed up before first customer ship and
allowed some work on improvement)
http://www.garlic.com/~lynn/2006q.html#50 Was FORTRAN buggy?

however, there was still a fundamental issue that 3880 controller
increased channel busy time per operation ... greater than had been
anticipated. in order to get back to balanced speeds&feeds for 3090
... the number of 3090 channels would have to be increased (to
compensate for the increased 3880 channel busy overhead).
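
a rough sizing sketch (python) of why more channels are needed when
per-operation channel busy goes up; the i/o rate, busy times and target
utilization are all invented numbers:

  import math

  ops_per_sec = 2000        # aggregate disk i/o rate to be sustained (assumed)
  busy_3830   = 0.0015      # seconds of channel busy per op (assumed)
  busy_3880   = 0.0025      # seconds of channel busy per op (assumed)
  target_util = 0.30        # keep channels ~30% busy (assumed)

  for name, busy in (("3830", busy_3830), ("3880", busy_3880)):
      channels = math.ceil(ops_per_sec * busy / target_util)
      print("%s: %d channels needed" % (name, channels))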

now, it was possible to build a 3090 with relatively few TCMs. the
requirement (because of increased 3880 channel busy) to increase the
number of channels resulted in requiring an additional TCM for 3090
build (for the additional channels) ... which wasn't an insignificant
increase in manufacturing cost. at one point there was a suggestion
(from pok) that the cost of the one additional TCM for every 3090 sold
... should be taken from sanjose's bottom line (as opposed to showing
up against POK's bottom line).

from above:
IBM tried to react by launching a major project called the 'Future
System' (FS) in the early 1970's. The idea was to get so far ahead
that the competition would never be able to keep up, and to have such
a high level of integration that it would be impossible for
competitors to follow a compatible niche strategy. However, the
project failed because the objectives were too ambitious for the
available technology. Many of the ideas that were developed were
nevertheless adapted for later generations. Once IBM had acknowledged
this failure, it launched its 'box strategy', which called for
competitiveness with all the different types of compatible
sub-systems. But this proved to be difficult because of IBM's cost
structure and its R&D spending, and the strategy only resulted in
a partial narrowing of the price gap between IBM and its rivals.

... snip ...

i.e. the 3880 "box strategy" might be construed as sub-optimal from
an overall system perspective.

Anne & Lynn Wheeler <lynn@garlic.com> writes:
in the early 80s ... "big pages" were implemented for both VM and MVS.
this didn't change the virtual page size ... but changed the unit of
moving pages between memory and 3380s ... i.e. "big pages" were
10 4k pages (3380) that moved to disk and were fetched back in from
disk. a page fault for any 4k page in a "big page" ... would result
in the whole "big page" being fetched from disk.

"big pages" support shipped in VM HPO3.4 ... it was referred to as
"swapper" ... however the traditional definition of swapping has been
to move all storage associated with a task in single unit ... I've
used the term of "big pages" ... since the implementation was more
akin to demand paging ... but in 3380 track sized units (10 4k pages).

in the original 370, there was support for both 2k and 4k pages
... and the page size unit of managing real storage with virtual
memory was also the unit of moving virtual memory between real storage
and disk. the smaller page size tended to better optimize constrained
real storage (i.e. an application might actually only need the first
half or the last half of a specific 4k page, so 2k page sizes could
mean that the application could effectively execute in less total real
storage).

with the increasing amounts of real storage ... there was more and
more of a tendency to leverage the additional real storage resources
to compensate for the declining relative system disk i/o efficiency.

this was seen in mid-70s with the vs1 "hand-shaking" that was somewhat
done in conjunction with the ECPS microcode enhancement for 370
138/148.
http://www.garlic.com/~lynn/94.html#21 370 ECPS VM microcode assist

VS1 was effectively MFT laid out to run in a single 4mbyte virtual
address space with 2k paging (somewhat akin to os/vs2 svs mapping MVT
to a single 16mbyte virtual address space). In vs1 hand-shaking, vs1
was run in a 4mbyte virtual machine with a one-to-one correspondence
between the 2k virtual pages of the vs1 4mbyte virtual address space
and the 4mbyte virtual machine address space.

VS1 hand-shaking effectively turned over paging to the vm virtual
machine handler (vm would present a special page fault interrupt to
the vs1 supervisor ... and then when vm had finished handling the page
fault, present a page complete interrupt to the vs1 supervisor). Part
of the increase in efficiency was eliminating duplicate paging when
VS1 was running under vm. However part of the efficiency improvement
was VM was doing demand paging using 4k transfers rather than VS1 2k
transfers. In fact, there were situations where VS1 running on a 1mbyte
370/148 under VM had better thruput than VS1 running stand-alone w/o VM
(the other part of this was my global LRU replacement algorithm and my
code pathlength from handling page fault, to doing the page i/o to
completion was much better than the equivalent VS1 code).

there were two issues with 3380. over the years, disk i/o had become
increasingly a significant system bottleneck; more specifically,
latency per disk access (arm motion and avg. rotational delay) was
significantly lagging behind improvements in other system components,
so part of compensating for disk i/o access latency was to
significantly increase the amount transferred per operation. the other
was that 3380 increased the transfer rate by a factor of ten while its
access time only improved by a factor of 3-4. significantly increasing
the amount transferred per access also better matched the changes in
disk technology over time (note later technologies introduced raid
that did large transfers across multiple disk arms in parallel)
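
a back-of-envelope sketch (python) of the trade-off; the access time
and transfer rate are round numbers assumed for illustration:

  access_ms = 25.0        # arm + avg rotational delay per access (assumed)
  rate_mb_s = 3.0         # 3380-class transfer rate, mbyte/sec

  for kbytes in (4, 40):  # one 4k page vs a ten-page "big page"
      total_ms = access_ms + kbytes / 1024.0 / rate_mb_s * 1000.0
      print("%3dk per access: %.1f ms, ~%.0f kbytes/sec effective"
            % (kbytes, total_ms, kbytes / total_ms * 1000.0))
  # the access dominates either way, so transferring 10x per access costs
  # little extra elapsed time but raises effective thruput dramatically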

full track caching is another approach that attempts to leverage the
relative abundance of electronic memory (in the drive or controller)
to compensate for the relatively high system cost of doing each disk
arm access. part of this is starting transfers (to the cache) as soon
as the arm has settled ... even before the head has reached the
requested record. disk rotation is part of the bottleneck ... so full
track caching goes ahead and transfers the full track during the
rotation ... on the off chance that the application might have some
need for any of the rest of the data on the track (the electronic
memory in the cache is relatively free compared to the high system
cost of doing each arm access and rotational delay).

there is a separate system optimization with respect to increasing the
physical page size. making the physical page size smaller allowed for
better optimization of relatively scarce real storage. with the shift
in system bottleneck from constrained real storage to constrained i/o
... it was possible to increase the amount of data paged per operation
w/o having to actually go to a larger physical page size (by
transferring multiple pages at a time ... as in the "big page"
scenario).

there is periodic discussion in comp.arch about the advantages of
going to much bigger (hardware) page sizes ... 64kbytes, 256kbytes,
etc ... as part of increasing TLB (translation look-aside buffer)
performance. the actual translation of a virtual address to a physical
real storage address is implemented in the TLB. A task switch may
result in the need to change TLB entries ... where hundreds of TLB
entries ... one for each application 4k virtual page ... may be
involved. For some loads/configurations, the TLB reload latency may
become a significant portion of the task switch elapsed time. Going to
much larger page sizes ... reduces the number of TLB entries ... and
possible TLB entry reloads ... that are necessary for running an
application.
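
the arithmetic as a quick sketch (python), with an assumed 16mbyte
working set:

  working_set = 16 * 1024 * 1024          # 16mbyte working set (assumed)

  for page in (4*1024, 64*1024, 256*1024, 1024*1024):
      entries = working_set // page
      print("%5dk pages: %5d tlb entries to cover the working set"
            % (page // 1024, entries))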

a hardware token can represent something you have technology and a
password can represent something you know technology. typically
multi-factor authentication is considered more secure because the
different factors have different/independent vulnerabilities
(i.e. pin/password considered countermeasure to lost/stolen token,
modulo not writing the pin/password on the token).

the vulnerability arises where the token validates itself using static
data (effectively a kind of pin/password). the static data can be
skimmed and used to create a counterfeit token. the yes card operation
involves the
infrastructure validating the token ... and then asking the token if
the entered pin was correct. the counterfeit yes cards are
programmed to always answer YES, regardless of what pin is entered.
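
a caricature (python) of the flaw ... the terminal trusts the card's
answer about the pin instead of verifying it independently; this is not
the actual emv exchange, just the shape of the problem:

  class GenuineCard:
      def __init__(self, pin): self._pin = pin
      def verify_pin(self, entered): return entered == self._pin

  class YesCard:
      def verify_pin(self, entered): return True     # always answers yes

  def terminal_transaction(card, entered_pin):
      # the terminal relies on the card's answer rather than an
      # independent check of what the user knows
      return "approved" if card.verify_pin(entered_pin) else "declined"

  print(terminal_transaction(GenuineCard("1234"), "0000"))   # declined
  print(terminal_transaction(YesCard(), "0000"))             # approved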

however, it is possible that the way that the token validates itself
is via some sort of one-time password technology (as opposed to some
purely static data technology). in such a situation, the one-time
password isn't independent of the token ... it is equivalent to the
token (and therefore doesn't represent multi-factor authentication).

another possible variation is using the token to transport
information used for authentication. in the yes card scenario, the
token was used for both transporting and verifying the user's PIN
... however there wasn't an independent method of verifying that the
user actually knew the PIN ... which in turn invalidated the
assumption about multi-factor authentication having
different/independent vulnerabilities (and therefore being more secure)

in the following reference discussion about electronic passports, the
token is used to carry personal information that can be used for
something you are authentication (guard checks the photo in the
token against a person's face). the issue here is a question about the
integrity of the information carried in the token (can it be
compromised or altered). however, the token itself doesn't really
represent any kind of something you have authentication (it is
purely used to carry/transport the information for something you
are authentication)
http://www.garlic.com/~lynn/aadsm25.htm#32 On-card displays

edgould1948@ibm-main.lst (Ed Gould) writes:
It would be interesting, I would think to have the "old timers"
compare the code that was used in the "old days" against what is used
today.

The code I think has been recoded many a time. Do you think the new
people could show the old people new tricks or would it be the other
way around?

some of this cropped up during the early days of os/vs2 svs
development.

at the time, cp67 was one of the few relatively successful operating
systems that supported virtual memory, paging, etc (at least in the
ibm camp). as a result, some of the people working on os/vs2 svs were
looking at pieces of cp67 as an example.

one of the big issues facing transition from real memory mvt to virtual
memory environment was what to do about channel programs.

in virtual machine environment, the guest operating system invokes
channel programs ... that have virtual addresses. channel operation
runs asynchronously with real addresses. as a result, cp67 had a lot
of code (module CCWTRANS) to create an exact replica of the virtual
channel program ... but with real addresses (along with fixing the
associated virtual pages at real addresses for the duration of the i/o
operation). these were "shadow" channel programs.
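
a minimal sketch (python) of what that shadowing amounts to; the page
table, pinning and ccw format here are toy stand-ins (and it ignores
data areas that cross page boundaries):

  PAGE = 4096
  page_table = {5: 42, 6: 17}          # virtual page -> real frame (toy)
  pinned = set()

  def virt_to_real(vaddr):
      vpage, offset = divmod(vaddr, PAGE)
      frame = page_table[vpage]        # a real ccwtrans would fault the page in
      pinned.add(vpage)                # keep it fixed while the channel uses it
      return frame * PAGE + offset

  def build_shadow(virtual_ccws):
      # each ccw: (command, data_address, count); addresses in the guest's
      # program are virtual, the copy handed to the channel must be real
      return [(cmd, virt_to_real(addr), count)
              for cmd, addr, count in virtual_ccws]

  guest_program = [("SEEK", 5*PAGE, 6),
                   ("SEARCH", 5*PAGE + 6, 5),
                   ("READ", 6*PAGE, 4096)]
  print(build_shadow(guest_program))
  print("pinned virtual pages:", sorted(pinned))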

svs had a comparable problem with channel programs generated in the
application space, with the address passed to the kernel via
EXCP/SVC0. the svs kernel was now also faced with scanning the virtual
channel program and creating a replica/shadow version using real
addresses. the initial work involved taking CCWTRANS from cp67 and
crafting it into the SVS development effort.

one of the other issues was that the POK performance modeling group
got involved in doing low-level event modeling of os/vs2 paging
operations. one of their conclusions ... which I argued with them
about ... was that replacing non-changed pages was more efficient than
selecting a changed page for replacement. no matter how much arguing,
they were adamant that on a page fault ... for a missing page ... the
page replacement algorithm should look for a non-changed page to be
replaced (rather than a changed page). their reasoning was that
replacing a non-changed page took significantly less effort (there was
no writing out of the current page required).

the issue is that in an LRU (least recently used) page replacement
strategy ... you are looking to replace pages that have the least
likelihood of being used in the near future. the non-changed/changed
strategy resulted in less weight being placed on whether the page
would be needed in the near future. this strategy went into svs and
continued into the very late 70s (with mvs) before it was corrected.
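
a toy comparison (python) of the two policies being argued about; the
resident set and changed bits are invented:

  from collections import OrderedDict

  def lru_victim(pages):
      # pages: OrderedDict vpage -> changed-bit, oldest reference first
      return next(iter(pages))

  def unchanged_first_victim(pages):
      for vpage, changed in pages.items():      # scan in lru order
          if not changed:
              return vpage                      # may pick a recently-used page
      return next(iter(pages))

  # oldest..newest; the oldest page happens to be changed, a newer one is clean
  resident = OrderedDict([(10, True), (11, True), (12, False), (13, True)])

  print("lru replaces:", lru_victim(resident))                          # 10
  print("unchanged-first replaces:", unchanged_first_victim(resident))  # 12
  # skipping the write-out saves one i/o now, but replacing a more
  # recently used page raises the chance of an extra fault (and i/o) later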

these days there is a lot of trade-off in trying to move data to and
from memory in really large block transfers ... and using excess
electronic memory to compensate for disk i/o bottlenecks. in the vs1
handshaking scenario ... vs1 letting vm do its paging in 4k blocks was
frequently significantly more efficient than paging in 2k blocks (it
made less efficient use of real storage, but it was a reasonable
trade-off since there were effectively more real storage resources
than there were disk i/o access resources).

later "big pages" went to 40k (10 4k page) 3380 track demand page
transfers. vm/hpo3.4 would typically do more total 4k transfers than
vm/hpo3.2 (for the same workload and thruput) ... however, it could do
the transfers with far fewer disk accesses; it made less efficient
use of real storage, but more efficient use of disk i/o accesses
(again trading off real storage resource efficiency for disk i/o
resource efficiency).

... or somewhat reminiscent of a line that I started using as an
undergraduate in connection with dynamic adaptive scheduling;
schedule to the (system thruput) bottleneck. misc. past posts
mentioning past dynamic adaptive scheduling work and/or the resource
manager
http://www.garlic.com/~lynn/subtopic.html#fairshare

Brian Inglis <Brian.Inglis@SystematicSW.Invalid> writes:
Had to be reduced to 9 pages (36KB) because the 3880/3380 would miss
the start of the next track (RPS miss) on a chained multi-block big
page transfer because of overhead.

processing latency ... this was if you wanted to do multiple
consecutive full track transfers ... with head switches to different
tracks (on the same cylinder; aka arm position) w/o losing
unnecessary revolutions ... aka being able to do multiple full track
transfers in the same number of disk rotations.

which meant that it was taking longer elapsed time between commands
... while the disks continued to rotate.

there had been earlier detailed studies regarding the elapsed time to
do a head switch on 3330s ... in order to read/write "consecutive"
blocks on different tracks (on the same cylinder) w/o unproductive disk
rotation. intra-track head switch (3330) official specs called for
a 110-byte dummy spacer record (between 4k page blocks) that allowed
time for processing the head switch command ... while the disk
continued to rotate. the rotation of the dummy spacer block overlapped
with the processing of the head switch command ... allowing the head
switch command processing to complete before the next 4k page block had
rotated past the r/w head.

the problem was that a 3330 track only had enuf room for three 4k page
blocks with 101-byte dummy spacer records (i.e. by the time the head
switch command had finished processing, the start of the next 4k record
had already rotated past the r/w head).

it turns out that both channels and disk controllers introduced
processing delay/latency. so i put together a test program that would
format a 3330 track with different sized dummy spacer blocks and then
test whether a head switch was performed fast enuf before the target
record had rotated past the r/w head.

i tested the program with 3830 controllers on 4341, 158, 168, 3031,
3033, and 3081. it turns out that with a 3830 in combination with a
4341 or a 370/168, the head switch command was processed within the
101 byte rotational latency.

combination of 3830 and 158 didn't process the head switch command
within the 101 byte rotation (resulting in a missed revolution). the
158 had integrated channel microcode sharing the 158 processor engine
with the 370 microcode. all the 303x processors had an external
"channel director" box. the 303x channel director boxes were a
dedicated 158 processing engine with only the integrated channel
microcode (w/o the 370 microcode) ... and none of the 303x processors
could handle the head switch processing within the 101 byte dummy
block rotation latency. the 3081 channels appeared to have similar
processing latency as 158 and 303x channel director (not able to
perform head switch operation within 101 dummy block rotation).

i also got a number of customer installations to run the test with a
wide variety of processors and both 3830 controllers and oem clone
disk controllers.

jmfbahciv writes:
Yes. It was very crude. When hardware was as iffy as it used
to be, being able to swap disk drives was nice to do. In addition
that's what SMP was starting to allow with _any_ piece of gear.
Remember the RP04s and RP06s had removable drive number plugs?

started running into this problem as they started acquiring customers
all around the world (early to mid 70s) ... and were faced with
providing 7x24 service.

one of the growing problems was that the field service people needed
to take over a machine once a month (or sometimes more often) for
service (and with 7x24 operation ... the traditional weekend sat or
sun midnight period was becoming less and less acceptable). at least
some of the service required the whole system ... where they would
run various kinds of stand-alone diagnostics.

to compensate, they ran loosely-coupled (cluster) configurations and
added software support for process migration across processors in
cluster. they even claimed to being able to migrate a process from a
cluster in datacenter on the east coast to cluster in datacenter on
the west coast ... modulo the amount of context/data that was required ...
back in the days of 56kbit telco links.

now, fast reboot had already been done back in the late 60s for cp67
... as cp67 systems were starting to move into more and more critical
timesharing (and starting to offer 7x24 service). this then carried
forward into vm370.

cp67 had been done on the 4th flr of 545 tech sq, multics on the 5th
flr of 545 tech sq ... and for some reason i believe MIT USL was in
one of the other tech sq bldgs (across the courtyard). tech sq had
three 10 story bldgs (9 office floors, was there a 10th?) forming a
courtyard ... with a two-story Polaroid bldg on the 4th (street) side
(i've told before that the 4th floor science center overlooked land's
balcony, and once watched a demo of the unannounced sx-70 being done
on the balcony).

the cause of the multiple cp67 crashes was a local software
modification that had been applied to the USL system. I had added
ascii/tty support to cp67 when i was an undergraduate at the
university ... and played some games with using one byte values. the
local USL modification was to increase the maximum tty terminal line
size from 80 chars to something like 1200(?) for some sort of new
device (some sort of plotter?) over at harvard. the games with one
byte values resulted in calculating incorrect lengths if the max. line
size was increased past 255 (which then resulted in the system
failing).
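
the failure mode in miniature (python sketch; the 1200 is the figure
mentioned above, everything else is illustrative):

  def one_byte(value):
      return value & 0xFF            # what fits in a single byte field

  max_line = 1200                    # the enlarged tty line size
  print(one_byte(80))                # 80  -- fine with the original 80-char max
  print(one_byte(max_line))          # 176 -- silently wrong; downstream length
                                     #        calculations then go off the rails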

note that in the above description ... the (IBM) boston programming
center also shared the 3rd floor of 545 tech sq. when the cp67 group
split off from the science center, they moved to the 3rd flr,
absorbing the boston programming center. as the group expanded and
morphed into the vm370 group ... it outgrew the 3rd floor and moved
out to the old sbc bldg in burlington mall (vacated when sbc was
sold/transferred to cdc).

At one point the trade press was talking about low cost block oriented
random access memory (BORAM), which would have been a natural for ES.
Unfortunately, that doesn't seem to have materialized, or at least
BORAM failed to maintain an adequate price lead.

... from reference above:
When a chip is b bits (b >= 2) wide, an access to a 64-bit data word
may have a b-bit block or byte error. There are codes to variously
correct single b-bit errors and detect double b-bit errors. For G3 and
G4, a code with 4-bit correction capability (S4EC) was implemented.
Because the system design included dynamic on-line repair of chips
with massive failures, it was not necessary to design a (78, 64) code
which could both correct one 4-bit error and detect a second 4-bit
error (D4ED). Such a code would have required an extra chip per
checking block. The (76, 64) S4EC/DED ECC implemented on G3 and G4 is
designed to ensure that all single-bit failures of one chip (and a
very high probability of double- and triple-bit failures) occurring in
the same doubleword as a 1- to 4-bit error on a second chip are detected
[15]. G5 returns to single-bit-per-chip ECC and is therefore able to
again use a less costly (72, 64) SEC/DED code and still protect the
system from catastrophic failures caused by a single array-chip
failure.

... from above
Both the central and expanded storages have error-correcting codes. The
central storage has a single error-correcting, double-error-detecting
code on each double word of data. The code is designed to detect all
four-bit errors on a single card. The correcting code is passed to the
caches on a fetch operation so that it can cover transmission errors
as well as storage-array errors. The expanded storage is even more
fault-tolerant. Each quad-word of the expanded storage has a
double-error-correcting, triple-error-detecting code. Again, a
four-bit error is always detected if caused by a single-card-level
failure.

Anne & Lynn Wheeler <lynn@garlic.com> writes:
say 6bits of storage key per 4k bytes is lost in the noise? (2k
storage keys as well as 2k virtual pages having been dropped around
3081 xa time-frame) ... if you wanted to worry about something
... there was 16bit ecc for every 64bit double word (or 2bits per 8bit
byte ... as opposed to parity bit per 8bit byte) ... optimizations
were trying to get failure coverage (better than simple 1bit/byte
parity) with less than 80bits (for 64bit of data) ... like 78bits,
72bits, etc ...

Was FORTRAN buggy?

jmfbahciv writes:
DEC had its first date problem in 1975. The project was called
DATE75 and was a specification in the software notebooks. I don't
know if Al ever got those old specs that have been removed.
It would also have been a DOC file on BLKC:.

tod clock was part of original 370 ... even before virtual memory for
370 had been announced.

i have some memory of spending 3 months in a taskforce/effort
discussing the tod clock ... one item was the original specification
that the clock epoch was the 1st day of the century ... and did the
century start 01jan1900 or 01jan1901 (and for some reason, for a lot
of early internal testing, people repeatedly set the epoch to neither,
but to 01jan1970). the other topic of interest that went round and
round was how to handle leap seconds.
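
a sketch (python) of the epoch arithmetic, assuming the usual tod
convention (bit 51 ticks once per microsecond, epoch 01jan1900); the
1900-to-1970 offset is 2,208,988,800 seconds:

  EPOCH_OFFSET = 2_208_988_800        # seconds from 1900-01-01 to 1970-01-01

  def tod_to_unix(tod):
      microseconds = tod >> 12        # bit 51 ticks once per microsecond
      return microseconds / 1e6 - EPOCH_OFFSET

  # a tod value for exactly 1970-01-01 00:00:00 should come out as unix time 0
  tod_1970 = (EPOCH_OFFSET * 1_000_000) << 12
  print(tod_to_unix(tod_1970))        # 0.0
  # leap seconds were the other argument: the tod clock just counts, so
  # any leap-second handling has to be a software convention on top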

Trying to design low level hard disk manipulation program

dgay writes:
I think you missed the point that the output of readdir is (and should
be) unrelated to the order presented to the user. Why is the file system
collating anyway? Now I can see the value of a library that collates
file names according to some system-wide convention...

one of the results of changes originally made by (i think ?)
perkin/elmer to the cms mfd in the early 70s was to sort the
filenames. then when an application was looking for a specific
filename ... the lookup could do much better than a linear search (and
searches better than linear were dependent on being matched to the
collating/sort sequence).

it really was a significant change for directories that happened to
have a couple thousand filenames (as was the case for some number of
high-use system disks).
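
a sketch (python) of why the sort pays off for a couple-thousand-entry
directory; the filenames are invented:

  from bisect import bisect_left

  def lookup(sorted_names, name):
      # o(log n) compares instead of o(n) ... but only correct if the
      # search uses the same collating sequence the sort used
      i = bisect_left(sorted_names, name)
      return i if i < len(sorted_names) and sorted_names[i] == name else None

  names = sorted("FILE%04d SCRIPT" % i for i in range(2000))
  print(lookup(names, "FILE1234 SCRIPT"))   # found in ~11 compares
  print(lookup(names, "NOSUCH   FILE"))     # None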

i recently ran into something similar using sort on filenames and
doing something other than linear search ... where the sort command's
default collating sequence changed and it moved how period was handled
(it showed up between capital H and capital I). i had to explicitly
set "LC_ALL=C" to get sort back working the way i was used to.

a similar, but different problem from long ago and far away ... when
we did an online telephone book for several hundred thousand corporate
employees. for lots of reasons ... the names/numbers were kept in a
linear flat file ... but sorted. the search was radix based ... using
measured first-letter frequency: taking the size of the file and
probing part way into the file based on the first letters of the
search argument and the related letter frequencies for names
(originally compiled into the search program). it could frequently get
to the appropriate physical record within a probe or two (w/o
requiring a separate index or other infrastructure).
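
a sketch (python) of that kind of probe; the letter-frequency table and
the names are invented, and a sorted list stands in for the flat file:

  import string

  # assumed cumulative fraction of names starting before each letter
  freq = {c: i / 26.0 for i, c in enumerate(string.ascii_uppercase)}

  def probe_offset(name, file_size):
      return int(freq.get(name[0].upper(), 0.0) * file_size)

  def find(names, target):
      i = probe_offset(target, len(names))   # initial probe from letter frequency
      while i > 0 and names[i] > target:     # then a short local scan
          i -= 1
      while i < len(names) and names[i] < target:
          i += 1
      return [n for n in names[i:] if n == target]   # duplicates allowed

  book = sorted(["ANDERSON", "BAKER", "JONES", "JONES", "SMITH", "WHEELER"])
  print(find(book, "JONES"))                 # both matching entries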

we had a special collating/sort order assuming that names (and search
arguments) had no blanks (even tho names with embedded blanks were
carried in the actual data; the ignore-blanks was a special sort
characteristic/option). in the name scenario ... name
collisions/duplicates were allowed ... so a search result might
present multiple matches.

Mickey and friends

jmfbahciv writes:
But it wasn't working. From what little I've read so far,
New York was getting big eyes for acquisitions. Mass had
a revolt. I'm pretty sure that Britain and France weren't
staying out of it. I don't know about Spain. I found
a book that supposedly talks about all the various democracy
experiments that were tried by all the states during those
years. It looks like the book is the m volume of an n set
so it may not have the data I want.

from above:
my wife has just started a set of books that had been awarded her
father at west point ... they are from a series of univ. history
lectures from the (18)70/80s (and the books have some inscription
about being awarded to her father for some excellence by the colonial
daughters of the 17th century).

part of the series covers the religious extremists that colonized new
england and that the people finally got sick of the extreme stuff that
the clerics and leaders were responsible for and eventually migrated
to more moderation. it reads similar to some of lawrence's
description of religious extremism in the seven pillars of wisdom.
there is also some thread that notes that w/o the democratic influence
of virginia and some of the other moderate colonies ... the extreme
views of new england would have resulted in a different country.

somewhat related is a story that my wife had from one of her uncles
several years ago. salem had sent out form letters to descendants of
the town's inhabitants asking for contributions for a memorial. the
uncle wrote back saying that since their family had provided the
entertainment at the original event ... that he felt that their family
had already contributed sufficiently.

... snip ... and ...
i was recently reading an old history book (published around 1880)
that claimed that it was extremely fortunate that the declaration of
independence (as well as other founding efforts) were much more
influenced by scottish descendants in the (state of) virginia area
... than any english influence from the (state of) mass. area ...
that the USA would be a markedly different nation if more of the
Massachusetts/English influence had prevailed (as opposed to the
Virginia/Scottish influence).

... snip ...

cold war again

wclodius writes:
What was not known until much later was that the Russians' initial
attempt at an ICBM system required extensive maintenance and was
difficult to fuel quickly. It was the primary ancestor of their
current space launch systems, but was a failure as an ICBM system.

originally ROMP was going to be an austin OPD office products
follow-on for the displaywriter. when that got canceled ... the group
looked around and decided to try and revive the box as a unix
workstation. they got the group that had done the at&t unix port
for pc/ix ... to do one for romp ... and you got rt/pc and aix.

the palo alto group had been working on doing a Berkeley port to 370.
at some point after the rt/pc first became available, the decision was
to retarget the effort from 370 to rt/pc ... and you got "aos".

there was a little discord between austin and palo alto over aos.

the original austin group was using cp.r and pl.8 for the
displaywriter work. as part of retargeting romp from displaywriter to
unix workstation ... it was decided that the austin pl.8 group could
implement a VRM (virtual resource manager, in pl.8). the group that
had done the pc/ix port, then would port to an abstract VRM layer
... rather than the bare metal.

palo alto then did the berkeley port for aos to the bare metal. the
problem was that austin had claimed that the total VRM development
effort plus the port to the VRM interface was less effort than any
straight port to the bare metal. unfortunately(?), palo alto's port to
the bare metal was done with very little resources and effort.