Why these original FORTRAN quirks?

jmfbahciv writes:
That's wrong. Capital equipment gets written off in a few years.
I'd be more than insulted if a company thought of their
employees as capital equipment. They have it completely upside down.
Your employees are supposed to become more valuable to you the longer
they work for you--not less.

in a static or slowly changing environment ... the longer you are with
a company, the more you learn about their operation and the experience
becomes valuable.

in a rapidly changing information/knowledge economy ... skills and
knowledge can quickly get stale and/or even obsolete. you now need
people that are continually adaptable/adapting.

in some scenarios this means justifying constant/continuing education
for participants. this provides an opening for the bean counters to
start treating knowledge/skills as analogous to capital
equipment. and in turn, they may start to treat people (with the
knowledge/skills) as equivalent to their knowledge/skills.

you can then fall into the trap that experience hardly counts at all
... and only the latest and greatest new model is worth anything.

ssh - password control or key control?

Ertugrul Soeylemez <never@drwxr-xr-x.org> writes:
With S/Key you wouldn't care, although one possible attack is that the
user hijacks your connection in the background. Probably the most
secure method is still not to login from an untrusted host.

but one of the prime scenario justifications for s/key is where you
aren't carrying anything with you and are still supposedly resilient to
replay attacks and man-in-the-middle attacks. however, one mitm attack
is to inject a number that is much lower than the current number of
rounds on file. the countermeasure is to validate the server you are
talking to before starting s/key (but that again invalidates one of the
major scenario justifications for using s/key).

standard shared-secret/password requires use of a unique password for
every unique/different security domain. another scenario for s/key is
that a common pass phrase could be used for all security domains
... if different servers used different salts. however, any single
server that you deal with could eavesdrop and impersonate other
servers: acquire the salts used by other servers and then use a much
lower number of rounds in an s/key impersonation (which invalidates
the assumption that different security domains are secure from each
other under s/key ... while still being able to use the same
passphrase). the countermeasure is for the client to retain a lot more
state information about previous s/key operations and who they are
dealing with. again, that invalidates the scenario justifications for
using s/key.

by the time you have finished with all the countermeasures ... you
have pretty much invalidated the scenario justifications for s/key (as
opposed to other solutions).

Ertugrul Soeylemez <never@drwxr-xr-x.org> writes:
The whole sense is that you can login safely from an untrusted host,
such that it can possibly get access to your private key (since you need
a clear-text version of it to authenticate), but they still wouldn't be
able to authenticate fully, because they don't know the next (actually
previous) S/Key hash.

registering a shared-secret/password with a host (easily vulnerable to
replay attacks) ... and as countermeasure to hosts in one security
domain impersonating you to hosts in other security domains
... every password has to be unique.

so instead the host gives you a salt and some registration iteration
count ... say 1000. you take your pass phrase and the host-specific
salt ... and you repeatedly hash it that number of iterations. that
gets registered (in lieu of a shared secret).

next time you are at some random place ... you contact the host
... they send back their salt and one less than the current on-file
iteration count ... you repeat the process used for registration
... except one iteration less this time. the host gets the iterated
hash value, hashes it one more time and compares it to the registered
value. if it matches ... it is you; they store the latest value you
calculated and decrement the iteration count.

this is a countermeasure to replay attacks ... since the same value is
never used twice ... and an eavesdropper can't predict the next value
... since hashes don't work (easily) in reverse.
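
a minimal sketch of the scheme just described (sha-256 standing in
for the md4 of the original s/key; function names are illustrative,
not from any actual implementation):

import hashlib

def iterate(passphrase, salt, n):
    # hash the salted passphrase n times
    value = (salt + passphrase).encode()
    for _ in range(n):
        value = hashlib.sha256(value).digest()
    return value

# registration: the host stores the 1000-fold hash plus the count
salt, count = "host-specific-salt", 1000
on_file = iterate("my pass phrase", salt, count)

# login: the host sends the salt and count-1; the client sends the
# 999-fold hash; the host hashes it once more and compares on-file
response = iterate("my pass phrase", salt, count - 1)
assert hashlib.sha256(response).digest() == on_file

# the host now stores the latest value and decrements the count --
# the same value is never sent twice, so a replayed response fails
on_file, count = response, count - 1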

now is s/key supposed to be resistant to both man-in-the-middle
attacks as well as replay attacks?

however, if i'm playing man-in-the-middle, i intercept the host's
transmission of the salt and the iteration count and replace the
count with ONE (instead of 999). you iterate only once, and transmit
the single iteration. i intercept the response, repeat the
iteration the original number of times and send it on. i now leave you
alone.

later i impersonate you ... the host is going to transmit the salt and
some iteration count ... typically some value much larger than one. I
don't have your original pass phrase ... but I can still impersonate
you. All i need is the iterated hash value for some iteration count
much less than what the host is currently using. effectively i have an
intermediate hash chain value ... and all I have to do is resume the
hash iteration from the iteration count I intercepted to generate the
iterated hash value specified by the host.
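
continuing the sketch above, the lowered-count attack looks roughly
like this (the attacker never needs the passphrase, only the one
low-iteration value captured from the victim):

# mitm substitutes a tiny iteration count, captures the victim's
# low-iteration response, and can then produce any higher-iteration
# value the host later asks for by resuming the chain
captured_n = 1
captured = iterate("my pass phrase", salt, captured_n)  # what the victim sent

def extend(intermediate, extra):
    # resume the hash chain from an intermediate value
    for _ in range(extra):
        intermediate = hashlib.sha256(intermediate).digest()
    return intermediate

# the host later asks for, say, iteration 998; the attacker resumes
# the chain from the captured iteration-1 value
forged = extend(captured, 998 - captured_n)
assert forged == iterate("my pass phrase", salt, 998)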

the original claim for s/key justification was that it was resistant
to both passive eavesdropping (and replay attacks) as well as
(non-passive) man-in-the-middle attacks. however, a (non-passive)
man-in-the-middle attack can substitute a lower iteration count
... as described in the previous post
http://www.garlic.com/~lynn/2006u.html#3 ssh - password control or key control?

so the countermeasure is to carry around a lot more than the memory of
a single passphrase ... some sort of hardware dongle that keeps track
of various state ... like the last iteration count that I saw ... and
the next iteration count I'm expecting.

however, that invalidates the original justification assumptions for
s/key. furthermore, if i'm going to be carrying around a hardware
dongle ... why can't it include more than simple memory ... but
actually some processing ... like being able to do a digital signature
w/o exposing the private key.

you could also have a radius authentication environment upgraded to do
digital signature authentication. you register a public key (in place
of an iterated hash) ... and then do digital signature verification
(instead of iterated hash verification) ... again w/o needing digital
certificates. misc. past posts mentioning radius
http://www.garlic.com/~lynn/subpubkey.html#radius
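
a sketch of the certificate-less flow: register a public key in
place of a password/iterated hash, then verify a signature over a
fresh challenge. ed25519 (from the third-party python "cryptography"
package) is purely an example algorithm; the radius protocol framing
is omitted:

import os
from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

# registration: only the public key goes to the server
private_key = Ed25519PrivateKey.generate()      # stays in the token/dongle
registered_public_key = private_key.public_key()

# authentication: server sends a fresh challenge, client signs it
challenge = os.urandom(32)                      # never reused -> no replay
signature = private_key.sign(challenge)

try:
    registered_public_key.verify(signature, challenge)
    print("authenticated")
except InvalidSignature:
    print("rejected")

note that unlike the iterated-hash case, an eavesdropper (or even a
man-in-the-middle that lowers a parameter) never sees anything it can
resume or replay; each challenge is fresh and the private key never
leaves the token.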

you could also do certificate-less kerberos digital signature
authentication. the original kerberos pk-init draft for digital
signature started out being certificate-less ... just recording a public
key (in lieu of password) and doing public key authentication very
much like ssh does its operation. it was only later that a certificate
mode of operation was added to the kerberos pk-init draft. misc. past
posts mentioning kerberos and/or pk-init
http://www.garlic.com/~lynn/subpubkey.html#kerberos

clicking on the ".txt=nnn" field in the RFC summary retrieves the
actual RFC.

See 1.0 Abstract, 2.0 Overview, and 3.0 Introduction in the above RFC
for a more detailed description of the operation. However, the
overview does say that it protects against external passive attacks and
eavesdropping (i.e. replay attacks) ... but does not protect against
"active attacks" (i.e. the man-in-the-middle attack I described).

Are there more stupid people in IT than there used to be?

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Are there more stupid people in IT than there used to be?
Newsgroups: alt.folklore.computers
Date: Thu, 02 Nov 2006 19:44:03 -0700

krw <krw@att.bizzzz> writes:
...until management decrees that they be changed every 60 days. It
takes me over half that to train my fingers and they confuse
easily. As soon as I get them trained I'm getting five "your ____
password will expire in nn days" emails every day.

shared-secrets/passwords worked better 30-40 years ago when you only
had one or two ... and used them every day. when you start having
scores or hundreds ... they become impossible to memorize ... and
changing them every 30-60 days aggravates a bad situation.

then you have complex rules about password characteristics to make
them hard to guess ... which also contributes to making them
impossible to remember.

supposedly PIN-debit cards are more secure because they represent
two-factor authentication (card as something you have and pin as
something you know) .... i.e. from the 3-factor authentication model
http://www.garlic.com/~lynn/subintegrity.html#3factor
• something you have
• something you know
• something you are

multi-factor authentication is supposedly more secure assuming that
the different factors have different vulnerabilities .... aka the pin
is a countermeasure to a lost/stolen card. however, some study claimed
that something like 30 percent of debit cards have PINs written on
them. the issue presumably is the massive use of the shared-secret
paradigm ... it becomes impossible for people to remember them all.

possibly of just slight historical interest: where the arpanet scaling
limits were. somebody had made satirical comments about arpanet in the
late 70s about IMPs requiring 56kbit interconnect because the majority
of the bandwidth could be taken up by inter-IMP administrative chatter
(including all the stuff about figuring out what would be the optimal
path for each packet). reaching 100 nodes by jan83 ... would imply
that the scaling limit was starting to be reached with a lot less than
100 nodes in the late 70s.

later the comment was that part of the transition off of IMPs and
arpanet protocol (with the switch-over to internetworking protocol on
1/1/83) was being able to get out from under various of the scaling
limitations.

it probably doesn't make a whole lot of difference some 25-30 years
later whether the arpanet scaling limits were 100 nodes or 250 hosts.

and endicott. this was software development activity to add "370"
virtual machine support to the cp67 kernel (running on 360/67). this
was before 370s were available ... and there were numerous
architecture differences between the virtual memory hardware on 360/67
and the 370 virtual memory architecture. the project required putting
a lot of code into the cp67 kernel to recognize 370 virtual machine
operation and a whole lot of simulation of 370 hardware characteristics
that were different from 360/67.

and there were people that cambridge was working with that were doing
arpanet support. for something a little bit different, my rfc index

and select Author in the RFCs listed by section and use the browser
find function to get "Winett J." Joel worked on cp67 out at Lincoln
... after cambridge got cp67 up and working, they installed cp67 out
at Lincoln in 1967 (the third cp67 installation was at the university
in jan68, where i was an undergraduate).
Winett J.
466 393 183 167 147 110 109

circa 76-78? ... and inexpensive enough that they were starting to
appear on everybody's desk. at the start of this period, there was a
requirement that getting a 3270 on a person's desk required
vice-president sign-off. we put together a business analysis that the
3yr amortized/depreciated cost of a 3270 terminal was about the same
as a business phone (which was available on everybody's desk as a
matter of course).

are we talking stuff that was ubiquitously available that everybody in
a corporation of several hundred thousand people could have one?

it was later that the ibm/pc and 3270 terminal were about the same cost
... so it was a relatively straightforward business case to justify an
ibm/pc as a 3270 terminal replacement ... where it was possible to
have both 3270 terminal emulation and some local desktop computing in
a single physical footprint (and at the same cost) ... that significant
market segment uptake contributed enormously to the wide proliferation
of the ibm/pc.

The quote of the month comes from Shaku Atre, president of Atre
International Consultants, Inc. in Rye, N.Y. In discussing how DEC is
trying to make headway into the "mainframe" market: "People shouldn't
be trying to get into the mainframe market right now. What DEC should
be doing is looking to its micros since there hasn't been anything
OVER THE RAINBOW".

DEC - SPECIFIC
______________

o Since DEC continues to have problems with their RA70 disk drive,
they are now offering another option to prospective MicroVAX 3500
customers. Now a customer can buy a MicroVAX 3500 with two RD-54
disk drives instead of the RA-70s. DEC previously offered the option
of RA-81 drives, but those big drives are not practical in many
office environments (Digital Review 3/21/88).

o The number of DEC-compatible manufacturers has dropped from 36 three
years ago to about 6 today. Fewer new firms are getting DEC cpu and
microprocessor chips. Three of the remaining six sell ruggedized
versions of MicroPDPs, MicroVAX I & II and PDP-11s, primarily to the
military (Digital Review 1/25/88).

y DEC's "announced" DESKTOP STRATEGY is to connect "MS-DOS, VAX-based
UNIX and Macintosh systems to DEC VAX/VMS systems, and to extend
services of Decnet/OSI to IBM OS/2 desktops" (Notice the use of the
words "Decnet/OSI". Is there any doubt about DEC's committed resolve
to OSI?) (Computerworld 1/25/88).

o DEC's NAS (Network Application Support) strategy is designed to
support connectivity of different systems in one environment, in
effect creating a common interface across them (i.e., positioning DEC
as being the central control point for the entire network). NAS will
support the following protocols: DAP (the Data Access Protocol in
Decnet); Microsoft's SMB (Server Message Block); Sun's NFS (Network
File System); Apple's AFP file sharing protocol; Adobe Systems'
Postscript for desktop publishing; DDIF (Digital Document Interchange
Format), DEC's internal document processing standard; and SQL for DB
use.

o DEC now includes DECnet/PCSA (Personal Computer System Architecture)
client licenses with VAXmates free of charge while at the same time
reducing VAXmate prices as much as 37%. The VAXmate originally sold
for $4,250.00 with an additional $500.00 charge for DECnet/PCSA
(Digital Review 3/7/88).

DEC - GENERAL
_____________

y Evans & Sutherland now markets an option for the newly-announced
VAXStation 8000 workstations that make it possible to view 3-D im-
ages on the screen "in stereo". The technology includes a time-
multiplexed liquid crystal shutter fitting over the display,
polarizing the display for both the left and right eye. To see the
effect, one must wear a pair of (you guessed it) glasses with
polarized lenses. The option costs $11,500.00 (Digital Review
3/21/88).

eugene@cse.ucsc.edu (Eugene Miya) writes:
This really says nothing of the firm Ivan started. Sure at this
time, this was about the time of the infamous Computer Division (the
one great hope to compete with Cray, I saved one, it's a good
argument with Gordon Bell). This says nothing of the main meat of
the firm which is high performance real-time graphics for flight
simulation, and other kinds of simulators (cars, tactical, etc.).

so the (real) flight simulators are probably a couple million? ... mentioned
on the e&s wiki page

with respect to the following, i got some followup emails commenting
about the "400MIP" number, the actual workstation being only 4-5 mips
... with the 400MIP coming from aggregating all the special purpose
graphics chips. I would guess that the $100k is on the low end of
prices for E&S machines?

the original email goes on at quite a bit more length with details
from the original announcement.

DEC formally announced the VAXstation 8000. Based on their VAX 8250
and MicroVAX II chip sets and the PS390 from Evans & Sutherland, DEC
has developed one of the fastest workstations (over 400 million 32-bit
arithmetic operations per second) with the highest vector drawing
speed (over 500,000 3D vectors/sec) and highest "apparent" resolution
(8192 x 6912) in the industry. This is DEC's first workstation to
offer hi-performance graphics with full 3D functions. In spite of
this, it is still a single-user workstation which lists at almost
$100,000.00, and is limited to 3 RD54 disk drives (total of 477 MB),
obviously expecting customers to use the VAXstation 8000 as a node in
a VAX Local-Area VAXCluster. I was asked why DEC would use 8250 chips
when the new CMOS MicroVAX 3600 chips are available. From a design
standpoint, the 8250 chip was already complete. From a manufacturing
standpoint, the 8250 chips were already manufactured and probably
sitting in a warehouse somewhere, since sales of 8250s have not been
setting the world on fire. And bottom line: I can see the DEC salesman
now - "We already have a 400 MIP workstation, we certainly do not need
any more horsepower at this point".

The following information was downline loaded from the Digital
Electronic Store about the announcement.

DESCRIPTION

High-Performance, 3D, Realtime Graphics Workstation

The VAXstation 8000 is the newest and most powerful member of
Digital's VAXstation family of high-performance graphic workstations.
Developed jointly by Digital Equipment Corporation and Evans &
Sutherland Corporation, the VAXstation 8000 is a high-performance,
full color workstation that can manipulate complex three-dimensional
graphic objects in realtime.

Evans & Sutherland is widely recognized as a leader in computer
graphics technology. The combination of their expertise in computer
graphics with Digital's system and workstation expertise has resulted
in an industry-leading product -- the VAXstation 8000.

The VAXstation 8000 is designed for applications requiring the highest
levels of computer graphics speed and clarity, such as molecular
modeling, fluid dynamics, mechanical computer-aided engineering and
design, manufacturing engineering, command and control, and computer
animation.

With the VAXstation 8000, Digital now extends the range of the
VAXstation family even further, from the low-cost desktop VAXstation
2000, through the VAXstation II/GPX and VAXstation 3200 and 3500, to
the state-of-the-art VAXstation 8000.

HIGH PERFORMANCE GRAPHICS WITH FULL 3D FEATURES

The VAXstation 8000 is a single-user VAXstation based on the VAX 8250
CPU and a high performance graphics subsystem. It is housed in a
compact, desk-side system enclosure. Digital's first system to offer
high-performance graphics hardware with full 3D functions, the
VAXstation 8000 offers the fastest vector drawing speed in the
industry -- over 500,000 3D vectors per second.

Unlike other very high performance workstations, which require special
programming and application interfaces, the VAXstation 8000 supports
the X Window System version 11 (on both the VMS and ULTRIX operating
systems) and PHIGS standards. These high-level interfaces are
standards in the workstation market and provide a fully compatible
link to the other members of Digital's VAXstation family. VAXstation
8000 workstation windowing software also includes an extensive 3D
graphics library in addition to the X Window System.

VAX PHIGS, Digital's new hierarchically-oriented 3D and 2D graphics
software language, enables application programmers to take advantage
of the power and speed of the VAXstation 8000 by using this standard
high-level graphics programming language.

Are there more stupid people in IT than there used to be?

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Are there more stupid people in IT than there used to be?
Newsgroups: alt.folklore.computers
Date: Fri, 03 Nov 2006 15:55:52 -0700

greymaus writes:
Schneier went over this in "Applied *".. Here we are changing over
to PIN, the card company sent me out their letter with a number on
it, fixed so they can not be read by holding a light against the
envelope, so I am asking if I can still sign for purchases, the lady
says yes, but it will be all PIN next (whenever), I can't decipher
the bloody number (failing eyesight), so she says "Get a younger
person to read it for you".. Hello?.. Best hardware outlet in the
area is going over to credit/password website, another bloody
password to remember.

there have been some issues with the change-over to debit cards that
can be used both with or w/o a pin ... where even if you always used
the debit card w/pin ... if it is lost/stolen ... others would be able
to use it w/o a pin.

there is a separate issue with chip&pin deployments ... where an
attacker could skim the chip authentication information ... potentially
using nearly identical processes to those that had been used to skim
magstripe information. the chip authentication information is then
injected into a counterfeit card. in the chip&pin deployments, after
the chip had authenticated to the terminal, the terminal then asked the
chip if the correct pin had been entered. the counterfeit cards were
programmed to always respond YES (regardless of the pin entered). this
gave rise to the label "yes card" ... some number of past posts
discussing the yes card vulnerability:
http://www.garlic.com/~lynn/subintegrity.html#yescard

aka once the chip authentication information had been skimmed ... it
wasn't even necessary to find out the correct pin for use with a
yes card.
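
a toy rendering of the flawed dialogue (names and flow are
illustrative only; real chip&pin messaging is far more involved):

class GenuineCard:
    def __init__(self, static_auth_data, pin):
        self.static_auth_data = static_auth_data  # skimmable, like magstripe
        self._pin = pin
    def authenticate(self):
        return self.static_auth_data
    def pin_ok(self, entered_pin):
        return entered_pin == self._pin

class YesCard:
    # counterfeit card loaded with skimmed static authentication data
    def __init__(self, skimmed_data):
        self.static_auth_data = skimmed_data
    def authenticate(self):
        return self.static_auth_data              # replays skimmed data
    def pin_ok(self, entered_pin):
        return True                               # always answers YES

def terminal_accepts(card, known_auth_data, entered_pin):
    # 1) the chip authenticates to the terminal; 2) the terminal asks
    #    the CHIP whether the entered pin was correct -- the flaw
    return card.authenticate() == known_auth_data and card.pin_ok(entered_pin)

real = GenuineCard(b"static-data", pin="1234")
fake = YesCard(real.authenticate())               # attacker skimmed the data
assert terminal_accepts(fake, b"static-data", entered_pin="0000")  # accepted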

my reference email from 1984 is about a custom implementation where one
box rolls over on day 399 and the other box rolls over on day
400. there have been some articles about pieces of the shuttle program
having (re)done various stuff with COTS ... which one would expect to
roll over on jan. 1st. the articles seem to imply that if they are
flying at the closest-in roll over date (whichever component that may
be) ... they would have to temporarily "reboot" all boxes ... as if a
brand new mission had started.

To RISC or not to RISC

Rostyslaw J. Lewyckyj wrote:
But this , I think badly, misses the point, because the improvements
in performance in both cases were not due to the choices of coding
language but to the redesign of the logic of how the tasks were done.
Moving to the HLLs may be considered to have yielded other overall
system benefits: ease of understanding the logic, ease of
maintainability, and perhaps portability.

Would recoding the old logic in HLL have yielded performance gains?
Would recoding the new logic in assembler have yielded performance
gains? Obviously the answer in the second case is YES, if you even
only did hot spot recoding.

And yes, I do realize that assembler vs HLL may not have been the
reason for Mr. Wheeler's posting.

given constrained skills and resources ... recoding the much more
complex logic in assembler represents a relatively marginal
(performance) improvement over recoding it in an HLL ... but at a
significant increase in effort (in some cases the HLL providing
off-the-shelf library functions that just aren't found in the
assembler environment).

one could even make the case that the theoretical recoding in
assembler (of the examples given) would never happen because of the
significant additional resources required (the old standby
... theoretically there is no difference between theory and practice
... but in practice there is).

long ago and far away (spring 76) i released the kernel resource
manager, which was all written in assembler. for some operations i
needed triple-word integer math precision (96bit) and it was an
enormous pain to code it from scratch using 32bit assembler
instructions.
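
a rough illustration of why that's painful: even a single 96-bit add
has to be composed from word-size operations with explicit carry
propagation (python masking stands in for 32-bit registers; on the
360 the carry shows up in the condition code and has to be tested and
propagated by hand):

MASK32 = 0xFFFFFFFF

def add96(a_words, b_words):
    # a_words/b_words: (high, middle, low) 32-bit words; sum mod 2**96
    result, carry = [], 0
    for a, b in zip(reversed(a_words), reversed(b_words)):  # low word first
        total = a + b + carry
        result.append(total & MASK32)   # keep the low 32 bits
        carry = total >> 32             # propagate the carry upward
    return tuple(reversed(result))

# 0x0_FFFFFFFF_FFFFFFFF + 1 carries all the way into the high word
assert add96((0, MASK32, MASK32), (0, 0, 1)) == (1, 0, 0)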

there was an underlying theme that doing operating system stuff in a
higher level language made a lot of stuff practical that just wouldn't
have happened if done in assembler. I think that the multics
implementation in pli had a similar theme ... i was at the science
center doing a lot of kernel stuff in assembler ... which was on the
4th flr of 545 tech sq
http://www.garlic.com/~lynn/subtopic.html#545tech

it is not theoretically impossible to write parallel code in assembler
... but it apparently is practically impossible to write lots of
parallel code in nearly all of the current, commonly used languages.

IA64 and emulator performance

ranjit_mathews@yahoo.com wrote:
I don't know that there is a SPARC emulator, but there's an IBM
mainframe emulator.

there are a number of commercial mainframe emulator products ... there
is also at least the hercules open source implementation
http://www.conmicro.cx/hercules/

which is available on a number of platforms. in some sense, there are
some similarities between the current generations of mainframe
emulators and the majority of the ("real") mainframe implementations in
the 60s, 70s, and thru the 80s ... i.e. microcode engines where the
360/370/390/etc. implementation was microcode running on the microcode
engines.

Why so little parallelism?

Eugene Miya wrote:
The mainframes guys like Lynn and others sort of dug their own
trenches. The UK has its own set of problems going back to the
handling of Turing thread yet again in a.f.c., and Alvey, and
Lighthill, and god knows what other mistakes you guys shoot
yourselves in the foot with. Your baggage.

i'm not sure what you are referring to ... possibly the invention of
the compare&swap instruction and the associated semantics in the late
60s and early 70s ... which was shipped as part of mainframe 370
machines in the early 70s. misc. past posts mentioning smp and/or
compare&swap instruction use for various kinds of parallel processing
operations
http://www.garlic.com/~lynn/subtopic.html#smp
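
the semantics in question, sketched in python: compare&swap atomically
compares a word with an expected old value and stores a new value only
if they still match, so updates can race without any lock held across
the computation. the lock below merely stands in for the hardware's
atomicity guarantee:

import threading

class Word:
    # simulated atomic word with 370 compare&swap-style semantics
    def __init__(self, value=0):
        self._value = value
        self._lock = threading.Lock()   # stands in for hardware atomicity
    def load(self):
        return self._value
    def compare_and_swap(self, expected, new):
        with self._lock:
            if self._value == expected:
                self._value = new
                return True
            return False

counter = Word(0)

def add_one():
    while True:                         # the classic CAS retry loop
        old = counter.load()
        if counter.compare_and_swap(old, old + 1):
            break

threads = [threading.Thread(target=add_one) for _ in range(8)]
for t in threads: t.start()
for t in threads: t.join()
assert counter.load() == 8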

the following reference was a long way from any mainframe world; this
was started nearly 20 years ago, and this particular reference is to
nearly 15 yrs ago, related to scaleup work for the ha/cmp project
http://www.garlic.com/~lynn/95.html#13

the issue was that rios had no provisions for the cache coherency used
in an smp parallel environment ... as a result we had to make design
trade-offs for message-passing parallelism in cluster scaleup
environments ... while not precluding being able to also support
cache-consistent parallelism.

since none of this was remotely mainframe oriented ... i guess you
must be referring to the mainframe trench created with the invention
of the compare&swap instruction and its deployment on 370 machines in
the early 70s ... which would then place all subsequent hardware
designs that implemented instruction and cache coherency semantics
similar to that of compare&swap semantics ... in the same trench???

Actually, the current DataVaults have 42 drives. Though the bus to
the DV is 64 bits wide, it is broken down into a 32-bit data path
inside the DV. There are 32 data drives, 7 ECC drives, and 3 hot
spares, each of which can be switched into any of the other 39
channels.

We also offer double-capacity DVs with 84 drives; no more bandwidth,
just a 2nd tier of drives off of each channel.

... snip ...
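
as an aside, 32 data drives plus 7 check drives is the same ratio as a
(39,32) SECDED code: 6 hamming check bits locate any single-bit error
and one overall parity bit detects double errors. a toy sketch of that
idea follows -- an assumption about the flavor of code, since the post
doesn't say exactly what the DataVault used:

CHECK_POSITIONS = (1, 2, 4, 8, 16, 32)

def encode(data_bits):                  # data_bits: 32 ints, each 0 or 1
    code = [0] * 39                     # index 0 = overall parity, 1..38 hamming
    data = iter(data_bits)
    for pos in range(1, 39):
        if pos not in CHECK_POSITIONS:  # non-power-of-two slots hold data
            code[pos] = next(data)
    for p in CHECK_POSITIONS:           # check bit p covers positions with bit p set
        code[p] = sum(code[i] for i in range(1, 39) if i & p) & 1
    code[0] = sum(code) & 1             # overall parity over everything
    return code

def correct(code):
    syndrome = sum(p for p in CHECK_POSITIONS
                   if sum(code[i] for i in range(1, 39) if i & p) & 1)
    if syndrome:                        # syndrome = position of the bad bit
        code[syndrome] ^= 1
    return code

word = [1, 0] * 16
damaged = encode(word)
damaged[13] ^= 1                        # one failed "drive"
assert correct(damaged) == encode(word) # single error repaired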

And the following for a lot more drift (although also mentioning
datavault) ....

some background on the Almaden reference can be found here:
http://www.almaden.ibm.com/StorageSystems/Past_Projects/TSM.shtml

We just got back from a joint LANL/LLNL/Discos/IBM meeting at Los
Alamos (we also had an earlier Unitree/IBM/ACSC meeting on Tuesday in
LA).

Partly as pure coincidence, on the flight down, I ran into xxxxxx
with two of his people on their way to Tucson to discuss the Almaden
workstation/pc backup/archive facility. It seems to be a coincidence
since one of the things brought up at the LANL meeting was a vendor
that has had a similar product (with a MVS backend) on the market. The
******* comment was that this vendor now has the product ported to an
Unitree (server) backend. Functions appear to be identical, providing
client registration with archive/backup policy requests that are then
scheduled (potentially offshift on a regular basis) ... with the
server "pulling" the files ... not the client pushing the files.

We discussed the RFT/LCMP enhancements for concurrent Unitree
operation on multiple server platforms (sharing same physical disk
drives). Also repeated some of the discussion that we had a couple
weeks ago with ******* research about some possible near-term
technology enhancements. Also has some discussion about the ANT stuff
that ****** did for Univ. of Mich. IFS system and the discussions that
have occured regarding all of their stuff on Unitree/RFTLCMP platform.
Including the advantages of having NFS, AFS3, AFS4 (and possibly other
protocols) all pointing to the SAME IEEE MSS-2 bitmap files. This
(along with automatic Unitree archiving facility) is also what many of
the other major customers are asking for with respect to Andrew/OSF.

We were able to come away pretty much addressing all of their
requirements except for a near-term tape backup for their connection
machine. The HIPPI D2 & tape-array products aren't yet on the
market ... so they have an interim solution that would utilize four
3490s driven in parallel as a logical tape array. Because of the
interim nature of the requirement, it didn't seem to be worthwhile to
address it with software in the RS/6000 (either via a RS/6000 adapter
card emulating 370 control unit or by one of the NSC A51x remote
device adapters ... it's been going on eight years since I've done much
NSC A51x remote device programming).

The connection machine is writing data to the data vault (a box with
32-drive disk data array, 7 ECC drives, and 3 spare drives that are
electronically switchable into the configuration). The requirement is
to periodically back data from the data vault (out over the HIPPI
interface) to tape (files that are currently on the order of 10s of
gigabytes, but growing eventually to potentially terabytes).

The proposed solution is that LANL go ahead and write a 3490 parallel
tape driver for MVS that is a Unitree agent. Unitree (running on
rs/6000) would control all standard operations ... but there would be
a new feature added allowing data & control paths to be separated
... with a modified "high-performance" Unitree FTP program running on
the CM's SUN machine (CM external interfaces are controlled thru Sun
machines). It would talk to the Unitree RS/6000 server code, but
requesting parallel tape migration. The RS/6000 would notify the MVS
parallel tape agent ... and there would be a direct HIPPI data path
set up between the data vault to the MVS/3090 for actually moving the
data. In effect, the 3090/MVS system would be operating as a tape
controller for the Unitree/RS6000 system.

As another aside, ********* commented that Convex has given all of
their Unitree high-performance enhancements back to ******* for
incorporation into the standard product.

And for one of my favorite subjects ... one that I also brought up when
we were dealing with the NCAR/Mesa possibilities ... the significant
advantages of using a Semantic Net in the Unitree Nameserver (i.e.
effectively the function that implements the file directory).
******** and some of the other people wanted to specifically follow-up
on that. I related one of the things that we had looked at in the
NCAR/Mesa timeframe regarding all the NASA tapes containing images of
the back side of the moon ... and the difficulty in being able to find
any specific image &/or images that fit specific criteria.

it wasn't a "camp" thing ... there was proposal from several of the
NSF funded supercomputing centers (CNSF, NCSA, PSC, SDSC) for NSF
funding for evaluation and selection of a common mass storage archive
solution .... which strongly leaned towards Unitree on Convex
platform. Just another one of those things that was happening in the
transition from strictly proprietary software to more open
environment.

We got pulled into the situation to push unitree on rs6000 as an
alternative solution.

Tina wrote:
AOS: The next big thing in data storage
Globalization has brought newer business challenges in the enterprise.
Issues like better corporate governance and compliance (be it
Sarbanes-Oxley in the US that requires timely and accurate financial
reporting, or Clause 49 in India) have taken the center stage.
Enterprises are coming under increased pressure to reduce operational
risks; increase business efficiency and create more business value.
http://www.technologyone.blogspot.com/2006/10/aos-next-big-thing-in-data-storage.html

... and as mentioned in the above ... aos actually started out as the
palo alto acis group doing a port of bsd to 370 ... i helped them find
a 370 C compiler for the port. when the port was retargeted from 370 to
romp ... they kept the same C compiler (but had a different backend
implemented).

When did computers start being EVIL???

Tim Shoppa wrote:
Even before that there was a sense of bureaucracies being unable to
manage the complexity/scope/schedule/requirements of large computer
systems. There were at least a couple of big military-industrial
projects (FAA, SAGE, etc.) that had spun out of control and some
reining in was being done by the early 60's.

Universal constants: world wars

Eugene Miya wrote:
Well it's amazing how Chuck was able to glean all those figures in
the Cold War. I just borrow George's copy every now and again.
Chuck used to live in Mountain View where I live, I think he's
further S now (easily web searchable). Pike would likely know.

from above:
Collapse: How Societies Choose to Fail or Succeed analyzes five ancient
societies that imploded horribly, attempting to find insights useful
for the modern, increasingly interconnected world.

....

Businesses being societies too, I thought it would be worthwhile trying
to apply Diamond's conclusions to the world of business (did you wonder
how I was going to justify the tax write-off for this book?).

Alan_Altmark@ibm-main.lst (Alan Altmark) writes:
Since we, as a group, are unable to agree that XEDIT is the Best
Mainframe Editor Ever (just bait - don't take it), I don't see how
we will agree on what constitutes Good Programming. In any
language. We can all construct programs that everyone will agree
are just fine, and others everyone will agree are horrible.
Naturally the world is filled with programs and programmers that
live in the middle. Big deal. In fact, I HOPE we don't all agree.
If we did, then how would we grow as programmers?

now that you mention it ...

some email exchange with the RED author from when i was trying to get
Endicott to release RED (in lieu of XEDIT). Later the Endicott
position was that it was the RED author's fault that he had
implemented RED before XEDIT and had put more features in RED than
were eventually implemented in XEDIT ... and therefore, since it was
the fault of the RED author that he had done something so bad, it was
his duty to fix up XEDIT with the additional features that he had put
in RED.

I have a feeling that I may have done you a disservice. When I was in
Endicott last May I told them they ought to be putting out RED instead
of XEDIT. I went into a lot of detail about how it was faster, more
function, better, etc. (although other people may have also) and left
them with documentation. XEDIT is now announced and is going
out. Since last May they have gone over the RED documents and have
added several of the items to XEDIT but it still isn't as good (except
maybe in the feature of being able to use EXEC2 which would be common
between edit environment and others).

When I talked to several people from Endicott at the internal meeting
and Share, they sort of acknowledged the above. My suggestion that
they scrap XEDIT and put out RED was received very lightly. (ignoring
the fact that they should have put out RED instead of XEDIT) their
feeling is that it is the duty of the RED author to enhance XEDIT to
include all RED features. That really floored me, Endicott seems to
have some perverted view of reality. It isn't their fault they put out
XEDIT instead of RED, but it is your fault that you put in more
features in RED than were in XEDIT.

... and then from earlier, not functional, just performance numbers.
*EDIT* is the old cms standby; the first cp67/cms implementation used
strictly disk-to-disk work files ... but fairly quickly was enhanced
to take advantage of virtual memory for its operation.

Those are interesting figures on editors ... but I figured that a
better measure of "internal performance" would involve something more
strenuous than "bottom". So I did a "bottom" by doing a "c / / / * *"
with all seven editors, with the following results:

i used to do my own internal system release/distribution with lots of
fixes and enhancements ... bldgs 14&15 refers to the disk engineering
lab and disk product test lab ... misc. postings here
http://www.garlic.com/~lynn/subtopic.html#disk

current schedule is to ship PLC/LTR 8 CP system next Monday. All (new)
bugs have been identified and fixed. System is currently running in
building 14&15. It is scheduled to be brought up here either tomorrow
or the next day. CP changes includes some conversion to CSL20 & a
number of updates from the current STL release 6 updates. CMS will
include RED 3.4 & XEDIT.

It would be helpful if all locations outside of the San Jose area ship
me a tape and mailing label if possible.

I got your script file (i haven't read it yet). I couldn't follow
your msg about 64K blocks & what I could do with them. Please
amplify. I have done nothing toward making RED sharable, as it seems
to be a big job.

I had made a small extension to the control information generated by
the cms "GENMOD" command to include shared segment specifications. This
was automatically used by the cms "LOADMOD" function (when loading a
cms executable image) to map any specified shared segments. As part of
the original CMS shared segment changes, I had modified the standard
cms editor and several other routines to be able to execute in a
read-only protected shared segment (these changes were included in some
of those picked up as part of the DCSS release). This particular
exchange with the author of the RED editor was concerning modifying the
RED executable image to reside in a read-only protected shared
segment. old discussion of changes to genmod/loadmod:
http://www.garlic.com/~lynn/2006o.html#53 The Fate of VM - was: Re: Baby MVS???

various collected posts about modifying and/or creating code for
residing in shared segments ... including issues with the os/360
convention of relocatable address constants creating difficulty for
allowing the same shared segment image to reside concurrently at
different virtual addresses in different virtual address spaces
(i.e. when i was modifying code to reside in a read-only protected
shared segment, i also attempted to remove the execution location
dependencies ... most frequently involving os/360 convention
relocatable address constants)

A feature that the RED editor added early was support for automatically
generating source changes as CMS update files. When I was an
undergraduate starting out working on cp67/cms, i was making a large
number of source code changes as CMS update files. The CMS update
command convention was similar to IEBUPDTE: "./" replace, insert,
delete, etc. functions ... based on sequence numbers in the original
source file. However, the "new" source had to have the sequence
numbers manually keyed in cols. 73-80. This became quite tedious, so i
created the preprocessor convention with a $ field on the update "./"
control statements. I wrote a preprocessor to the CMS update command
that preprocessed a source update, stripped off the $ and generated
a temporary update working file with all the new source statements
with the sequence numbers automatically added.
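
a sketch of what that preprocessor did, under the conventions
described (80-column card images, sequence numbers in cols 73-80; the
exact numbering rules and control-card syntax here are assumed):

def preprocess(update_lines, start=10000, step=1000):
    # strip the trailing $ from "./" control cards and generate
    # sequence numbers into cols 73-80 of the new source cards
    out, seq, numbering = [], start, False
    for line in update_lines:
        if line.startswith("./"):
            numbering = line.rstrip().endswith("$")
            out.append(line.rstrip().rstrip("$").rstrip())
        elif numbering:
            out.append(f"{line:<72.72}{seq:08d}")  # pad/truncate to col 72
            seq += step
        else:
            out.append(line)
    return out

update = ["./ I 2600 $",
          "         BALR  12,0",
          "         USING *,12"]
for card in preprocess(update):
    print(repr(card))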

The $ convention was later adopted by the multi-level source
update process that was created during the l/h/i effort ... a joint
effort between the science center and endicott to add 370 virtual
machine support to the cp67 kernel (running on a real 360/67). recent
posting about that effort:
http://www.garlic.com/~lynn/2006u.html#7 The Future of CPUs: What's After Multi-Core?

eugene@cse.ucsc.edu (Eugene Miya) writes:
Sure it was a camp.
These guys were Cray sites who went along with the DOE's CTSS OS and
UniTree as an afterthought. So when Unicos came along and CTSS was not
portable enough it left CTSS basically dead.

I'm sorry, i misunderstood your original comment to be referring to
convex being part of some "camp" vis-a-vis their own proprietary
solution ... and i thought i was replying that it was my impression
that it wasn't a "camp" thing for convex ... it was something that some
customers may have wanted to use with convex ... and convex was
responding to something their customers wanted.

i didn't mean to imply that there weren't a number of solutions and that
customers might be choosing particular solutions ... and then the
customers might have preferences for one solution or another
("camp" if you will) ... aka
http://www.garlic.com/~lynn/2006u.html#20 Why so little parallelism?

where:

Eugene Miya wrote:
This implies Convex was in UniTree's camp. Convex had their very
own storage manager CSM which was not a bad system but never caught on.
Too bad it never got along past 2.0.

i was trying to distinguish between the customers possibly wanting
something on a convex platform ... vis-a-vis convex taking a position
with respect to what their customers should want. i wouldn't view
convex cooperating with their customers as to a particular solution as
necessarily meaning that it was a "camp"/membership thing for convex
(and didn't mean to imply anything at all about whether or not it
might or might not be a "camp" thing for the customers).

Unitree(/lincs) is basically one of the four that all evolved around
the same time. The other three being CFS(LLNL), Mesa(NCAR), and
NAStore .... reference:

Newsgroups: comp.unix.large
Date: 15 Apr 92 19:18:07 GMT

As it was mentioned in an earlier posting, let me add a couple of
words on NAStore.

NAStore is a system to provide a Unix based, network connected file
system with the appearance of unlimited storage capacity. The system
was designed and developed at NASA Ames Research Center for the
Numerical Aerodynamic Simulation program. The goal was to provide
seemingly unlimited file space via transparent archival (or migration)
of files to removable media (3480 tapes) in both robotic and manual
handlers. Supported file sizes exceed the 2 gigabyte limit on most
systems. Archived data is restored when accessed by the user with
each byte being available as soon as it is restored rather than having
to wait for the whole file as is the case with other archival systems.

The NAStore system has been used here for 3 years and is under ongoing
development. It is based upon Amdahl's UTS and runs upon an Amdahl
5880. We have 200 gigabytes of on-line disk and 6 terabytes of
robotic tape.

If you care for more information, let me suggest the following reading:
1989 Winter Usenix Conference Proceedings, see the article on RASH
the last IEEE Mass Storage Symposium proceedings

If you are still interested, contact Dave Tweten, e-mail tweten@nas.nasa.gov

Brian Inglis <Brian.Inglis@SystematicSW.Invalid> writes:
Most systems seem to have come with more than one editor. Each
editor does some things better than others. People have different
things they do with editors and can pick the one that does their
task best. Why have IBM mainframe systems stuck to the single
systems product editor approach?

one might conjecture that it might have something to do with internal
wars that frequently verged on the religious, and with the ability of
all parties to escalate to a higher authority.

EDGAR was the first cms fullscreen editor (that made it out as a
product) ... other than the minor enhancements to the original cms
(line) editor. there then came a whole slew of internal editors
that had all sorts of fullscreen features and scripting/macro
capabilities. then came the wars to see which one would be announced
as the (corporate) product ... with the EDGAR faction attempting to
protect their initial position ... and many of the others all jockeying
for permission to be announced. Once one had achieved that permission
... then there were constant ongoing battles to stifle any competition.

even the virtual machine effort, cp67 and vm370 (including cms), had a
nearly constant cloud hanging over it of being terminated after being
declared "non-strategic" (and competitive) by the "mainstream"
operating system organization. i've mentioned before the vm370
development group being periodically told they had made their last
product shipment and their last release. At one point, the whole vm370
product group location (in burlington mall) was shutdown, and everybody
told that they had to move to POK where everybody would be supporting
mvs/xa development (and one more time, there would be no more virtual
machine operating system).

"Dunny" <paul.dunn4@ntlworld.com> writes:
Kind of, as I understand it JIT is a process of interpretation which blends
in commonly-executed code chunks as native blocks - a kind of cache. This
behaviour makes it much faster than simple interpretation, but still not as
fast as native code.

I remember this sort of trade-off first being looked at circa 1980
with fort knox. the low/mid-range 360/370 machines had all been
implemented in microcode ... with nearly every model having a
completely different microcode engine. 801/risc was being proposed as
a common architecture to replace the large numbers of microprocessors
in use by the corporation. there then were efforts to look at various
JIT techniques as part of the transition to 801/risc. this effort
eventually floundered for one reason or another ... although the claim
could be made that it sort of re-emerged with the transition of the
as/400 from a cisc engine to a powerpc engine. misc. 801, risc, power,
somerset, etc. postings
http://www.garlic.com/~lynn/subtopic.html#801
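
the trade-off in miniature (a toy only -- nothing like the actual
fort knox-era work, whose details aren't in the post): interpret each
chunk, count executions, and swap in a "compiled" version once a
chunk turns hot:

HOT_THRESHOLD = 10

def interpret(ops, x):
    for op, arg in ops:                 # dispatch per instruction
        x = x + arg if op == "add" else x * arg
    return x

def compile_chunk(ops):
    body = list(ops)
    def compiled(x):                    # stand-in for emitted native code
        for op, arg in body:
            x = x + arg if op == "add" else x * arg
        return x
    return compiled

class MiniJIT:
    def __init__(self):
        self.counts, self.compiled = {}, {}
    def run(self, chunk_id, ops, x):
        if chunk_id in self.compiled:   # fast path: previously "compiled"
            return self.compiled[chunk_id](x)
        self.counts[chunk_id] = self.counts.get(chunk_id, 0) + 1
        if self.counts[chunk_id] >= HOT_THRESHOLD:
            self.compiled[chunk_id] = compile_chunk(ops)
        return interpret(ops, x)        # slow path: one op at a time

jit = MiniJIT()
program = [("add", 3), ("mul", 2)]
for _ in range(20):                     # chunk turns hot after 10 runs
    assert jit.run("loop-body", program, 1) == 8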

eugene@cse.ucsc.edu (Eugene Miya) writes:
I wonder how the storage war is going to shape up?
Will tape really die as Jim Gray predicts or will tape drives be
relegated to data retrieval devices or one last read off tape?

part of the tape issue is packaging/density/convenience/cost ... for
some things datacenter and transmission costs have been reduced to the
point where it is cost effective to have hot remote/redundant sites
... as opposed to disaster/recovery dependent on offsite backup tapes.

at that time, she saw very little uptake ... except for the IMS
(database) group for IMS hot standby. not that long ago, we were
talking to one of the major financial transaction operations and they
attributed their 100 percent availability over a span of years to

"mike" <mike@mike.net> writes:
On both CISC and RISC versions of the AS/400 - Series i all programs
are first converted to an intermediate "W" code which is common to the
high level virtual machine of all models. W code is then compiled,
with optimization, to the physical processor instruction set. The
full list of W code is stored along with the actual hardware specific
executable to allow re-compiles as needed. This is how the same
"program object" can run with reasonable efficiency on the 16 bit
Sys/38 the original 32 bit CISC /400 and the latest 64 bit Power
processor. This is not much like JIT.

it was topic drift ... i didn't claim that the final move of as/400
from cisc to risc was like jit ... i claimed jit was like some of the
stuff that went on for fort knox in the 1980 time-frame ... and that,
parenthetically, the (much later) conversion of as/400 from cisc to
risc met some of the original objectives of fort knox ... i.e. moving
to a risc-based processor (but I didn't claim that the movement of
as/400 from cisc to risc used any of the earlier jit-like stuff that
went on in the 1980 fort knox time-frame ... which was much more
specifically related to 360/370).

however, in the 1980 time-frame for fort knox, the rochester
microprocessor was one of the targets for switch-over to 801 (even if
none of the jit-related activity may have been involved in any of the
possible conversion of rochester microprocessor to 801).

it wouldn't have been directly applicable to as/400 in any case,
since as/400 wasn't introduced until 1988, long after the fort knox
effort had been killed ... wiki entry for as/400
https://en.wikipedia.org/wiki/AS/400

references mentioning that fort knox eventually at least partially
succeeded with the as/400 risc
http://www.itjungle.com/tfh/tfh120709-story08.html

and more detail here about fort knox objectives (including more
than rochester microprocessors)
http://www.riteapproach.com/book/ibm.html

for misc. other background ... i had been doing some work with the los
gatos vlsi lab (bldg. 29) in the fort knox time-frame. they were
doing a 32-bit 801 (blue iliad) design (that never shipped). they also
had two people doing a 370 pascal compiler for use in developing
various chip design tools (and which eventually shipped first as the
pascal iup ... and then as the vs/pascal compiler).

i had also written a pli program in the early 70s that analyzed
(360/370) assembler program listings ... creating high level
abstractions of the instructions, recreating control flow and register
usage ... looking for use-before-set type scenarios ... and generated
a high level pseudo code representation of the assembler program.

some of the people looking at possible jit-type operations applied to
370 code (using 801 for the follow-on to the 4331/4341) contacted me
about my program ... possibly enabling them to use some of it for what
they were doing (at the time, the primary 801 programming language was
pl.8, a subset of pli). recent posts discussing some of this with
regard to 360/370
http://www.garlic.com/~lynn/2006p.html#4 Greatest Software Ever Written?
http://www.garlic.com/~lynn/2006r.html#24 A Day For Surprises (Astounding Itanium Tricks)

and for the other drift about the 370 pascal compiler, one of the
people in bldg. 29 doing the 370 pascal compiler left and became head
of software at MIPs. after MIPs was bought by SGI, he shows up as
general manager of the SUN business unit that includes JAVA.

... some of which, I believe, may have also implemented JIT technology.

for other drift ... doing search engine for as/400 and fort knox also
turns up this article
http://www.cbronline.com/article_cbr.asp?guid=2C99DD1B-6812-46AF-AF95-B6D7E728376A

from above:
The failed Fort Knox project of the early 1980s, meanwhile,
attempted to merge IBM's low-end mainframe and midrange minicomputer
lines into a single family that ran on a variant of IBM's first RISC
processor, the 801.

Many of the architectural ideas behind Fort Knox ended up in the S/38
and AS/400 lines, and those ideas - such as a high-level machine
interface that separates hardware from the operating system (OS) -
have allowed the IBM midrange to be extended for two decades while
providing application software compatibility.

the executive we reported to left to head up somerset ... the effort
to take power/801 and do the power/pc (he later left that to become
president of MIPs).

predating the somerset/powerpc effort was a design for 64bit 801/power
... that went on in conjunction with rochester. however, there was
quite a bit of argument with rochester over the 65th bit ... a
tag-line that they wanted to use for implementing the s38/as400
storage access feature.

Steve_Thompson_TW@STERCOMM.COM (Thompson, Steve , SCI TW) writes:
But as I said in a prior posting, ALC is a low level language. So
perhaps those of us that have programmed in it for years just think
quite differently (I started on S/360 machines). Don't get me wrong, I
am not interested in GATE/TEST coding -- Macrocode at Amdahl was close
enough to the bare metal.

this came up in connection with a project that was going to migrate the
plethora of internal microprocessors to 801/risc (including the ones
used in low & mid-range 370s) ... and the possibility that they might
use some JIT technology ... a dynamic alternative to things like what
was done for ECPS (statically migrating high-use kernel pathlength
into microcode).

old email from somebody that I worked with on ecps ... discussing
macrocode (he mentions that he had wanted to do something similar for
quite a while)
http://www.garlic.com/~lynn/2006p.html#42 old hypervisor email

lists@AKPHS.COM (Phil Smith III) writes:
This is fairly OT, but the key graf is:
"When the shuttle's flight control software was developed in the
1970s, NASA managers did not envision the possibility of flying
missions during the transition from one year to the next. Internal
clocks, instead of rolling over to Jan. 1, 2007, would simply keep
counting up, putting them at odds with navigation systems on the
ground."

this is part of (somebody's) old email from 1984 (discussing a number
of calendar roll-over problems):
3. We have an interesting calendar problem in Houston. The Shuttle
Orbiter carries a box called an MTU (Master Timing Unit). The MTU
gives yyyyddd for the date. That's ok, but it runs out to ddd=400
before it rolls over. Mainly to keep the ongoing orbit calculations
smooth. Our simulator (hardware part) handles a date out to ddd=999.
Our simulator (software part) handles a date out to ddd=399. What we
need to do, I guess, is not ever have any 5-week long missions that
start on New Year's Eve. I wrote a requirements change once to try to
straighten this out, but chickened out when I started getting odd
looks and snickers (and enormous cost estimates).

... snip ...

where they anticipated roll-over ... but the original equipment extended
days in the year out past the end of the year (however, inconsistently,
as mentioned: to 399, 400, and 999 days).

the issue now appears to be that, since 1984, some of the equipment
may have been upgraded to more COTS products ... which just do
standard roll-over at the end of the year.
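
the mismatch is easy to state in code -- a sketch, with the behaviors
as described above (day limits of 399/400/999 for the original boxes,
jan. 1st roll-over for COTS):

def next_day_original(year, ddd, limit=400):    # MTU-style: keep counting
    ddd += 1
    if ddd >= limit:                            # only rolls at its own limit
        year, ddd = year + 1, 1
    return year, ddd

def next_day_cots(year, ddd, days_in_year=365): # standard calendar roll-over
    ddd += 1
    if ddd > days_in_year:
        year, ddd = year + 1, 1
    return year, ddd

# dec 31 (day 365): the original box says 2006/366, COTS says 2007/001
print(next_day_original(2006, 365))             # (2006, 366)
print(next_day_cots(2006, 365))                 # (2007, 1)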

remote support questions - curiosity

the other viewpoint was that the software was designed for dedicated,
disconnected tabletop operation ... and allowed numerous applications
(games, etc) to take over the whole machine. a little later the
software was extended to support desktop operations with some local
area business network (non-hostile and non-adversarial). it was
designed very well to do what it was intended to do (and in fact a
great deal of countermeasures to machine take-over would have been
counterproductive in its original target market).

it was when those pesky users started attaching the product
(originally designed for totally stand-alone operation) to open (and
potentially extremely hostile) networks, that you started having
problems. it is somewhat like taking a Model T and asking why it
doesn't have crush zones, safety belts, airbags, rollbars, safety
glass, padded dashes, headrests, etc.

then automatic scripting (much of which had been originally targeted
at closed, non-hostile, cooperative environments) exploits started to
drastically increase until buffer overflow exploits and automatic
scripting exploits were about equal. the potential for automatic
scripting vulnerabilities was something that had been identified on
the internal network in the 70s.

a couple years ago, there was an estimate that 1/3rd of the exploits
were buffer overflow related, 1/3rd automatic scripting related, and
1/3rd social engineering related.

the latest seems to be a big upswing in phishing ... which can be
considered a form of *social engineering* ... i.e. convincing the
victim to do something for the attacker (frequently involving
divulging sensitive information).

a major objective of phishing attacks is to obtain sensitive
information that is frequently used in something you know
authentication (that can be turned around and used by the attacker in
replay and/or impersonation exploits).

or account numbers ... where attackers can turn around and use the
account numbers in transactions requiring little or no additional
information ... misc. posts mentioning account number harvesting for
fraudulent transactions
http://www.garlic.com/~lynn/subintegrity.html#harvest

"Del Cecchi" <delcecchiofthenorth@gmail.com> writes:
With great trepidation I must correct Lynn. Fort Knox was more like 1990
since the processor was to be Iliad. And AS400 was born from the
wreckage of Fort Knox as a descendent of S/38 and "platform" that allowed
S/36 code to run on S/38 like machine. See the book "Silver Lake" or
anything by Frank Soltis.

there were a number of Iliads .... misc. reference from the early
80s.

I also spent some time with one of xxxxx's Iliad design people on
Monday. He is attempting to tackle an on-chip virtual cache. Problem
was how to resolve synonyms ... same real storage addressed by
different virtual addresses.

from above:
The goal of Fort Knox was to unify the hodgepodge of smaller IBM
computers then on the market and put DEC and its imitators on the
run. Following IBM's then rigid development process, the
specifications for creating this computer grew more complex as each
day passed. By 1985 it became obvious that the development team was
years away from a workable product, and the project was canceled.

so in your statement "Fort Knox was more like 1990" ... are you
referring to the fact that by 1985 (when it was killed), it was
realized that the scope creep(?) in Fort Knox meant that nothing would
be available before 1990?

my references were to stuff going on in the early days of the project
(circa 1980) ... and not to when it might eventually have been
expected to ship (and, in fact, i don't make any claims as to knowing
when it might have shipped).

the other references imply that Fort Knox was more than just
Rochester ... and that it started in the early 80s. Endicott's use of
801 for the follow-on to 4341 was abandoned at least by the time 4381
was announced in 1983. I contributed to a white paper that helped
Endicott decide to abandon the 801 for the 4341 follow-on.

so i don't see what in your post contradicts something in my post.

"Del Cecchi" <delcecchiofthenorth@gmail.com> writes:
With great trepidation I must correct Lynn. Fort Knox was more like 1990
since the processor was to be Iliad. And AS400 was born from the
wreckage of Fort Knox as a descendent of S/38 and "platform" that allowed
S/36 code to run on S/38 like machine. See the book "Silver Lake" or
anything by Frank Soltis.

P390

timothy.sipples@US.IBM.COM (Timothy Sipples) writes:
I posted an item at The Mainframe Blog (http://mainframe.typepad.com) today
with a discussion of what's currently involved in obtaining a "home
mainframe" (which might be personal or might be shared among a group of
developers). The post might spur some interesting comments, and anyone is
welcome to comment.

This paper describes crypto attacks on the protocols and standards for
financial ATM PIN processing.

The results show an inherent flaw with the way ATM PINs are encrypted
and conveyed on the international financial networks.
One of the most disturbing results is that instead of just having to
trust that your own issuer bank has good control over insider fraud,
every other financial institution in the network must be trusted as
well - an insider at another bank can crypto-crack your ATM PIN if you
withdraw money from any of their ATMs.

for quite some time, the conventional wisdom has been that insiders
are the greatest source of fraud, data breaches, identity theft, etc.

where a PIN represents something you know authentication. supposedly
multi-factor authentication is considered more secure because the
different factors are selected to have independent
vulnerabilities/exploits ... i.e. a PIN is a countermeasure to a
lost/stolen card. ignore for a moment that, in part because of the
proliferation of something you know authentication ... supposedly
something like 30 percent of debit cards have the PIN written on them
... lots of past posts about shared-secret based something you know
authentication
http://www.garlic.com/~lynn/subintegrity.html#secrets

for a couple decades there have been exploits/attacks on terminals that
skim authentication information. this collects both the (static data)
magstripe information (that represents something you have
authentication) and pin at the same time (something you know). this
represents a common vulnerability to debit card multi-factor authentication
... negating any assumption about multi-factor authentication being more
secure. compromise of terminals can be either insider or outsider exploit,
although outsider exploits seem to be what makes the news. lots of
past posts about harvesting static authentication information:
http://www.garlic.com/~lynn/subintegrity.html#harvest
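
as a toy illustration (purely made-up names, not any real payment
protocol): the multi-factor assumption only holds while the factors
fail independently ... a compromised terminal is a single point where
both factors are captured at the same time.

    # toy model of the common-vulnerability argument above; a
    # compromised terminal sees the magstripe image (something you
    # have) and the PIN entry (something you know) in the same place,
    # so one skimmer captures both factors at once

    class SkimmingTerminal:
        def __init__(self):
            self.harvested = []                       # (track_data, pin) pairs

        def authorize(self, track_data, pin):
            self.harvested.append((track_data, pin))  # single point of capture
            return True                               # transaction proceeds normally

    terminal = SkimmingTerminal()
    terminal.authorize(track_data="static magstripe image", pin="1234")

    # the attacker can now replay *both* factors; the PIN no longer
    # adds an independent hurdle on top of the (static) card data
    stolen_track, stolen_pin = terminal.harvested[0]
    print("replayable:", stolen_track, stolen_pin)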

the following has some discussion of PINs and the 2984 atm machine
done at the los gatos lab (where for a time i had nearly one wing of
offices), and the demolition of the los gatos lab
http://www.garlic.com/~lynn/2006q.html#5

"John Coleman" <jcoleman@franciscan.edu> writes:
Before structured programming methods developed, code was often
written in a "spaghetti" style, so called because of the lack of
clear block structure. Goto and other branch statements were used,
causing the program flow to jump from one place to another, which
produced code that was a tangled mess to everyone but the original
author.

part of it was dependencies ... complex/convoluted logic was hard to
modify ... but the complex/convoluted logic could be highly
efficient.

in failure analysis and postmortem dumps, part of the issue is
reconstructing the sequence of events that resulted in the failure.
convoluted/complex spaghetti code ... with lots of GOTOs that converge
on the same place ... could make it difficult to reconstruct the
execution path that arrived at the failure point.

as i've posted a lot in the past ... for assembler code ... a
frequent failure mode was an unusual code path that resulted in use of
a register that hadn't been correctly initialized.
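
a loose high-level analogue (python rather than 360 assembler, and
purely illustrative): several paths converge on the same code, and one
unusual path arrives without doing the setup ... the failure shows up
at the point of use, far away from the path that caused it.

    # loose analogue of the uninitialized-register failure mode: the
    # rarely-taken "audit" path reaches the common code without ever
    # setting buf (the "register"); the dump shows where it died, not
    # which path got it there

    def process(record, retry=False, audit=False):
        if record.startswith("HDR"):
            buf = record[3:]
        elif retry:
            buf = record.strip()
        elif audit:
            pass                  # unusual path: forgets to initialize buf
        else:
            buf = record
        return buf.upper()        # fails here, but only on the audit path

    print(process("HDR123"))              # fine
    print(process("x", retry=True))       # fine
    try:
        process("x", audit=True)          # blows up at point of use
    except UnboundLocalError as e:
        print("failure far from its cause:", e)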

part of the threat analysis was the extensive occurrence of skimming
exploits and/or data breaches ... and attackers (either insiders or
outsiders) being able to utilize the acquired static information in a
form of replay attack.

part of this effort was recognizing that transaction and account
information is used in dozens of business processes ... and even if
the planet was buried under miles of encryption ... it still wouldn't
be able to prevent transaction/account leakage (whether it involved
insiders or outsiders).

so part of the x9.59 financial standard was to eliminate the
usefulness of transaction/account information to the attackers
... i.e. even with all the transaction/account information, they
couldn't use it directly for the (replay attack) fraudulent
transactions that go on today; aka x9.59 didn't do anything to
eliminate data breaches (or skimming) ... it just eliminated the
usefulness of the data breaches (and skimming) to the attackers.
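
a minimal sketch of the idea (hmac-sha256 standing in for the actual
x9.59 digital signature mechanism; the field names are made up):

    # every transaction carries an authentication value computed with
    # a secret the attacker doesn't get from breaches/skimming; static
    # account data by itself no longer originates transactions

    import hashlib
    import hmac

    def authenticate(txn, key):
        msg = "|".join(f"{k}={v}" for k, v in sorted(txn.items())).encode()
        return hmac.new(key, msg, hashlib.sha256).digest()

    cardholder_key = b"secret held by the cardholder's token/card"
    txn = {"account": "4111-1111", "amount": "42.00", "seq": "10071"}
    tag = authenticate(txn, cardholder_key)

    # an attacker with a complete copy of txn (breach, skimming) still
    # can't produce a valid tag for a *different* transaction
    forged = dict(txn, amount="9999.00", seq="10072")
    assert authenticate(forged, cardholder_key) != tag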

"Frank Swarbrick" <Frank.Swarbrick@efirstbank.com> writes:
Some time ago (several years, probably) I had some questions regarding our
TCP/IP vendor and the fact that they would seem to always wait for an ACK
before sending the next packet. I think it was pretty much agreed upon here
that this was fairly non-standard (though not necessarily "forbidden" by any
relevant RFC), and would also result in longer transmission times. The
vendor argued that it is done in the name of 'data integrity'. Others
argued, of course, that even though the receiving TCP acknowledged a packet
that that does not necessarily mean that the packet had been forwarded to
the receiving application, or even if it was if the receiving application
had handled it successfully. I could never seem to convince the vendor,
though.

remember that SNA had been a pretty much half-duplex protocol
... which meant continually waiting for the link to turn around. there
was even a document about attempts to implement lu6.2 on top of a
tcp/ip layer and the horrible problems trying to get half-duplex lu6.2
to deal with an asynchronous, full-duplex tcp/ip transmission layer.

the original vm/mvs tcp/ip product was implemented in vs/pascal and
got something like 44kbytes/sec aggregate thruput and consumed a 3090
processor doing it. i had added rfc 1044 support to the product ... and
in testing at cray research between a cray and a 4341-clone ... was
getting 1mbyte/sec aggregate thruput using only a portion of the
(4341-clone) processor.
http://www.garlic.com/~lynn/subnetwork.html#1044

one of the protocols used was NAK and selective-resend for missing
packets and/or packets in error.

that included some number of full-duplex T1 satellite links
... full-duplex asynchronous operation could easily have twenty 4k-byte
packets outstanding simultaneously in both directions on a link. with
approx. 22k miles to orbit ... or 4*22k miles for complete round-trip
... comes out to approx. half-second round-trip latency. half-duplex
synchronous operation would have been two packets/sec thruput ...
while the rate-based pacing and selective resend that was implemented
could drive a full duplex T1 at nearly media speed (about
300kbytes/sec or 75 4k-byte packets/sec).
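
the back-of-envelope arithmetic (numbers rounded; the geosynchronous
distance and the T1 rate are approximations):

    # bandwidth-delay arithmetic behind the paragraph above

    miles_round_trip = 4 * 22_000              # up/down, both directions
    speed_of_light = 186_000                   # miles/sec
    rtt = miles_round_trip / speed_of_light    # ~0.47 sec ... call it half a second

    pkt = 4096                                 # 4k-byte packets
    print(f"half-duplex sync: ~{1/rtt:.0f} packets/sec "
          f"({pkt/rtt/1000:.0f} kbytes/sec)")

    t1_aggregate = 300_000                     # ~full-duplex T1 aggregate, bytes/sec
    in_flight = t1_aggregate * rtt / pkt       # packets needed to keep the pipe full
    print(f"media speed: ~{t1_aggregate // pkt} packets/sec, "
          f"~{in_flight:.0f} packets in flight")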

actually the selective resend didn't do a lot of good early on. all
the links that left corporate facilities (including satellite links)
had to be encrypted (at one point somebody claimed that the internal
network had over half of all the link encryptors in the world).

in any case, early on, we had to settle for some (essentially)
off-the-shelf T1 link encryptors ... when things got too garbled the
link encryptors retrenched to initial synchronization mode ... which
could last long enuf (especially on sat. link) that the software would
think the link had dropped and would automatically recycle
it. fortunately the 15/16 reed-solomon forward error correction
helped minimize the number of times that happened.

later i was involved in designing an inexpensive crypto board that
minimized the crypto resynchronization problem (that was also targeted
at being able to handle a couple megabytes/sec ... bytes not bits).

note that the possibility would still exist that an application could
fail to pick up a packet even in a half-duplex synchronous scenario ...

if you are looking to deal with that fault scenario ... the
application would need some higher-level logic regardless of which
scenario tcp/ip used (synchronous, one block at a time, or some N
blocks outstanding asynchronously). if tcp/ip acks a packet before the
application signals that it has completely processed the packet
... there are still all sorts of opportunities for things to go wrong
... whether it is a single packet outstanding or 100 packets
outstanding.
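
a sketch of that higher-level logic (the framing is made up for
illustration, and it assumes payloads contain no newline): the
application protocol itself carries the confirmation, independent of
how many packets tcp keeps in flight underneath.

    # application-level acknowledgment on top of tcp: the sender
    # doesn't consider a record done until the receiving *application*
    # says so (tcp's own ack only means the remote tcp got the bytes)

    import socket

    def send_record(sock, seq, payload):
        sock.sendall(f"{seq}:".encode() + payload + b"\n")
        reply = sock.makefile("r").readline()   # wait for app-level ack
        if reply.strip() != f"DONE {seq}":
            raise RuntimeError(f"record {seq} not confirmed by application")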

from above:
The gang targeted freestanding cash dispensers and would tap the phone
line between the ATM and a wall socket by placing a two-way adaptor on
it and connecting an MP3 player, according to the newspaper.

uri writes:
Note that the bad guys here did not take money out of ATMs using the
recorded info (they couldn't because they could not decrypt the PINs),
they just shopped - which is a classic credit card fraud.

i.e. current generation of debit cards (typically carrying association
logo) can be used in either PIN-debit or signature-debit mode. you have
to specially request a card that is PIN-debit only.

for the summer of '69 (between semesters), I was con'ed into being a
full-time employee at Boeing, helping set up new dataprocessing for
the recently formed BCS (earlier in the spring, I had been con'ed into
teaching a 40hr computer class to BCS technical staff during spring
break).

For a long time, I thought that the Renton datacenter was one of the
largest in the world ... however there was some rumor that NKP may
have been larger (one of the biographies mentions that NKP represented
a $2.5B windfall to IBM).

Anne & Lynn Wheeler <lynn@garlic.com> writes:
For a long time, I thought that the Renton datacenter was one of the
largest in the world ... however there was some rumor that NKP may
have been larger (one of the biographies mentions that NKP represented
a $2.5B windfall to IBM).

while i haven't worked on a mainframe in a long time ... this post mentions
analysing the performance of a mainframe 450k line cobol program.
http://www.garlic.com/~lynn/2006s.html#24 Curiosity: CPU % for COBOL program

where i did all the multiple regression analysis of the COBOL
application activity counters using a PC-based application.

the 14 percent wasn't trivial since the application ran in a
datacenter across $1.5B of mainframe equipment (primarily sized for
this one application). However, that was 2000 dollars rather than the
1970s dollars quoted for renton and NKP.

Brian Inglis <Brian.Inglis@SystematicSW.Invalid> writes:
As in (from Wikipedia):
"Nakhon Phanom Royal Thai Air Force Base (NKP) is a Royal Thai Navy
facility. During the Vietnam War it was a front-line base of the
United States Air Force."

THE CONDEMNED
*
WHEN THE EARTH WAS CREATED, THE POWERS ABOVE
GAVE EACH MAN A JOB TO WORK AT AND LOVE.
HE MADE DOCTORS AND LAWYERS AND PLUMBERS AND THEN -
HE MADE CARPENTERS, SINGERS, AND CONFIDENCE MEN.
AND WHEN EACH HAD A JOB TO WORK AS HE SHOULD,
HE LOOKED THEM ALL OVER AND SAW IT WAS GOOD.
*
HE THEN SAT DOWN TO REST FOR A DAY,
WHEN A HORRIBLE GROAN CHANCED TO COME IN HIS WAY.
THE LORD THEN LOOKED DOWN, AND HIS EYES OPENED WIDE -
FOR A MOTLEY COLLECTION OF BUMS STOOD OUTSIDE.
"OH! WHAT CAN THEY WANT?" THE CREATOR ASKED THEN
"HELP US," THEY CRIED OUT, "A JOB FOR US MEN."
"WE HAVE NO PROFESSION," THEY CRIED IN DISMAY,
"AND EVEN THE JAILS HAVE TURNED US AWAY."
SAID THE LORD, "I'VE SEEN MANY THINGS WITHOUT WORTH -
BUT HERE I FIND GATHERED THE SCUM OF THE EARTH!"
*
THE LORD WAS PERPLEXED - THEN HE WAS MAD.
FOR ALL THE JOBS, THERE WAS NONE TO BE HAD!
THEN HE SPAKE ALOUD IN A DEEP, ANGRY TONE ---
"FOR EVER AND EVER YE MONGRELS SHALL ROAM.
YE SHALL FREEZE IN THE SUMMER AND SWEAT WHEN IT'S COLD -
YE SHALL WORK ON EQUIPMENT THAT'S DIRTY AND OLD.
YE SHALL CRAWL UNDER RAISED FLOORS, AND THERE CABLES LAY -
YE SHALL BE CALLED OUT AT MIDNIGHT AND WORK THROUGH THE DAY.
YE SHALL WORK ON ALL HOLIDAYS, AND NOT MAKE YOUR WORTH -
YE SHALL BE BLAMED FOR ALL DOWNTIME THAT OCCURS ON THE EARTH.
YE SHALL WATCH ALL THE GLORY GO TO SOFTWARE AND SALES -
YE SHALL BE BLAMED BY THEM BOTH IF THE SYSTEM THEN FAILS.
YE SHALL BE PAID NOTHING OUT OF SORROW AND TEARS -
YE SHALL BE FOREVER CURSED, AND CALLED FIELD ENGINEERS!"
*

.... snip ...

and then there is this ...

In the first few months of the year that System 360 Operating System
came to a full stop, all signs appeared normal and there was no
indication of an impending disaster. The SDD Manager of Programming
Systems stated at the spring SHARE meeting that the F Level of FORTRAN
V would definitely be implemented and would at least equal the speed
of the E Level FORTRAN V subset, provided it was run on a Model 75 or
greater. There was no truth, he asserted, to the rumor that IBM was
dropping FORTRAN in favor of PL/3. Option 89, or MVC
(multiprogramming with a variable number of CPU's), which had been
released in System Release 101.8 was hailed by a large number of users
as the ultimate in operating systems. Representatives of a major
government agency which had been running a Model 91 with 8 million
bytes using a modified BPS supervisor, lodged a mild protest, but were
shouted down by the majority.

On April 1, an announcement by the Management Information Department
of DPD caused quite a stir. Their Management Action Optimization
(MAO) program would be written using the new Linear Interpretation
Nucleus (LIN), part of DOS extended. This occurred, it was rumored,
in spite of persistent efforts by the Marketing Verification
Department (MVD) to persuade them to use OS. This department is
charged with the "purification" of TYPE II programming standards.

There were indications, however, that something was in the air. The
OS Internals Workshop was extended from 13 weeks to 26 weeks. A
resident psychiatrist was installed to try to cut down on nervous
breakdowns, defections, and AWOL's. A blue letter advised salesmen
that "throughput" and "turnaround time" were not to be used. The
byword was to be "full utilization of system resources." At all
costs, customers were to be discouraged from asking, "But when will my
job be completed?"

Release 91.0 contained a module of the nucleus that stopped the
software clock during system overhead time. Murmurs about the
difference between meter time and time accounted for led to the
removal of all meters and a shift from a 176 hour base to 264 hours
per month. Dissatisfaction was increasing, however; one large
scientific/engineering/commercial customer announced his intention to
switch to a competitor, but after two years was unable to do so
because he was unable to discover exactly what his system was doing.

The end finally came in mid-October. System Release 110.7 was
distributed, which converted everyone to MPSS (Multiple Priority
Scheduling System), which combined the following control program
options:

Multiprogramming with a Valuable Number of Tasks
Multijob Initiation
Multiple Priority Selection
Multiprocessing with a Variable Number of CPU's

SYSGEN was accomplished with little difficulty in 504 system hours.
Expectantly, customers IPLed and initiated their job streams.

Nothing Happened
Nothing.

When it slowly dawned on everyone that nothing was going to happen,
now or later, a flood of anguished telephone calls swamped the branch
offices. At Poughkeepsie, in turn, all extensions, all twenty-five
thousand of them, were busy. Unauthorized vehicles were turned away
at the entrance roads. The Director of Programming Systems could not
be found.

At last a brave customer engineer fought his way through the crowd
around his system and obtained a dump. As he scanned the hex, the
horrible truth came home to him. All of core, as far as the eye could
see, was filled with control blocks, each containing pointers to other
control blocks. DADSM was allocating and suballocating, searching
DSCB's and building new ones. Job Management was initiating new jobs,
task management was creating tasks and ATTACHING and LINKING, data
management was opening data sets, and building WTG tables, DCB's,
DEB's, ECB's, and IOB's. It was finding TIOT's from tasks dispatched
by task management, which pointed to JFCB's. But no programs were
being executed. No data was being read or written or processed.
Operating System had taken over all the system resources and was
entirely occupied with issuing supervisor calls, saving registers,
restoring registers, chaining forwards and backwards, and following
pointers all over core. Every pointer led to some other pointer.
Operating System, after several years of effort by thousands of
programmers, had finally become a completely closed system.

The great dilemma was solved only through the intervention of the
Chairman of the Board, who personally issued a black-bordered Blue
Letter announcing the withdrawal of Operating System. A large bonfire
was built in the Poughkeepsie parking lot in which a huge mountain of
OS documentation was burned, while the local high-school band played a
funeral dirge. Users all over the world wearily set to the task of
rewriting using the BPS assembler. A new programming system was
announced for delivery in two years, to be called Assembler Stacked
Support (ASS). And everyone breathed a great sigh of relief and
became happy for a time.

OS IS NOT TO REASON WHY . . .
(To the tune of "Everglades" and "The MTA", and with apologies to the
Kingston Trio, which does not necessarily include IBM.)
(sparkling guitar intro)
I.
I was born and raised around Poughkeepsie,
A programmer is what I had to be;
But IBM and its programming team
Have turned me into a debugging machine.
Running all my jobs under MVT.

CHORUS:
Where a job can run and never be found,
And all you see are discs goin' 'round;
And when you get your output the results are nil:
If the JCL don't get you then the systems will.

II.
I put my job in the input queue,
And watched in awe as the system blew.
When I reran the job, I felt really crushed;
I saw on my listing: INPUT STREAM DATA FLUSHED.
Running all my jobs under MVT.

CHORUS

III.
I reran the job and ran out of space;
I reran the job with a step out of place;
I reran the job with priority 10 ... (pause)
"Will it ever return, no, it'll never return,
And its fate is still unlearned,
It may hide forever in SYS1.LINKLIB..."
(pause)
Running all my jobs under MVT.

CHORUS

IV.
Well, I couldn't get a job past the JCL hump,
So I never got a chance to read an ABEND dump,
If I could get one through, I'd have debugging fun,
'Cause the job was in the language known as PL/I.

CHORUS
Getting lots of grief from this MVT.
Running like a thief 'way from MVT.
Getting round this mess via DOS.

(rousing guitar finale)

Why these original FORTRAN quirks?

"Terence" <tbwright@cantv.net> writes:
"Relocating code" is what any linker does with your Fortran subroutine
and function object code. It's very easy to dump a module and work out
how the compiler stores the resulting mixture of code, entry points and
segmentation information. I always thought taking things apart helps
learn how to put stuff together.

the issue in the os/360 paradigm (which has been carried thru to
today) is that the relocatable objects are called *relocatable address
constants* ... which tend to be liberally distributed thru the
executable image. the loader/linker function requires fetching the
executable image into the address space and then running thru that
image, swizzling the relocatable address constants to correspond to
the address at which the image was loaded.

... all the pages mapped into the virtual address space that contained
relocatable address constants first had to be prefetched and modified
before execution could even begin ... and then the operating system
had the additional overhead of dealing with the modified pages
... even tho the executable image was nominally otherwise unchanged.

the next issue was attempting to treat page mapped executable images
as shared, read-only code ... the same (physical) copy appearing
simultaneously in different address spaces ... possibly at a unique
address in each address space. unless there is some quantum effect
where a relocatable address constant can simultaneously take on a
large number of different values ... and the appropriate value is
automagically provided for the address space that is currently
executing ... it broke down.

lots of past posts about dealing with the problem of relocatable
address constants in a shared segment (same physical copy in different
address spaces ... at potentially different addresses) paradigm.
http://www.garlic.com/~lynn/submain.html#adcon

basically, i had to rewrite the code to remove the use of relocatable
address constants in programs targeted for such shared segments. other
infrastructures that supported such operation out of the box would
have collected all the relocatable address constants in a separate
control structure ... not part of the executable image. each address
space would have its own unique instruction address and set of
registers. one of the registers would be used to address the control
structure containing the appropriately adjusted relocatable address
constants.
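
a small simulation of the two conventions (python standing in for the
real loader; the offsets and the "image" are made up):

    # scheme 1 (the os/360 convention): absolute address constants are
    # embedded in the executable image itself; loading means rewriting
    # ("swizzling") every slot holding one, dirtying those pages and
    # tying the image to a single load address
    image = {0x120: 0x0, 0x8F4: 0x40}      # adcon slot -> target offset

    def swizzle(image, base):
        for slot, target_off in image.items():
            image[slot] = base + target_off    # the image itself is modified

    # scheme 2: collect the adcons in a per-address-space control
    # structure addressed off a register; the image is never written,
    # so one read-only copy can appear at a different address in every
    # address space
    def adcon_table(target_offsets, base):
        return [base + off for off in target_offsets]

    table_a = adcon_table([0x0, 0x40], base=0x20000)   # address space A
    table_b = adcon_table([0x0, 0x40], base=0x70000)   # address space B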

my wife had co-authored AWP39, peer-to-peer networking, in the early
days of SNA ... which was oriented towards managing huge numbers of
dumb terminals. she took a lot of grief ... in fact, in most of the
world, peer-to-peer is implicit in the term networking. SNA had
co-opted *networking* to mean a massive terminal communication
environment ... so it became a requirement, when dealing with SNA
oriented operations, to prefix *networking* with *peer-to-peer* when
talking about regular networking (as opposed to SNA).

later she was con'ed into going to POK to be in charge of
loosely-coupled (mainframe) architecture. while there she originated
peer-coupled shared data architecture (again notice the requirement
for the term peer-to-peer when dealing with an SNA entrenched
environment ... where in the rest of the world, peer-to-peer would be
implicit).
http://www.garlic.com/~lynn/submain.html#shareddata

she didn't last long and left ... peer-coupled shared data not
getting a lot of uptake (except for IMS hot-standby) until parallel
sysplex came along.

i do claim that one of the reasons behind the relatively rapid ibm/pc
uptake was business customers being able to get dumb (3270) terminal
emulation on the ibm/pc. for about the same cost as a 3270, an ibm/pc
provided a single desk footprint with both dumb terminal emulation and
some amount of local computing capability.
http://www.garlic.com/~lynn/subnetwork.html#emulation

however, in the later 80s, real client/server with PCs was starting to
emerge. and we had come up with 3-tier architecture and were out
making customer executive presentations. the mainstream was pushing
something called SAA ... which could be construed as trying to cram
the client/server genie back into the bottle (and return to dumb
terminal emulation). we took a lot of grief from the SAA crowd for
being out there pushing 3-tier architecture.
http://www.garlic.com/~lynn/subnetwork.html#3tier

Anne & Lynn Wheeler <lynn@garlic.com> writes:
note that it was external visibility like this that got hsdt into
political problems with the communication group ... in part, because
there was such a huge gap between what they were doing and what we
were doing.
http://www.garlic.com/~lynn/subnetwork.html#hsdt

re: hsdt; For the past year, we have been working with Bob Shahan &
NSF to define joint-study with NSF for backbone on the
super-computers. There have been several meetings in Milford with ACIS
general manager (xxxxx) and the director of development (xxxxx). We
have also had a couple of meetings with the director of NSF.

Just recently we had a meeting with Ken King (from Cornell) and xxxxx
(from ACIS) to go over the details of who writes the joint study. ACIS
has also just brought on a new person to be assigned to this activity
(xxxxx). After reviewing some of the project details, King asked for a
meeting with 15-20 universities and labs around the country to
discuss various joint-studies and the application of the technology to
several high-speed data transport related projects. That meeting is
scheduled to be held in June to discuss numerous university &/or NSF
related communication projects and the applicability of joint studies
with the IBM HSDT project.

I'm a little afraid that the June meeting might turn into a 3-ring
circus with so many people & different potential applications in one
meeting (who are also possibly being exposed to the technology &
concepts for the 1st time). I'm going to try and have some smaller
meetings with specific universities (prior to the big get together in
June) and attempt to iron out some details beforehand (to minimize the
confusion in the June meeting).

part of the visibility thing was that not too long after the above ...
we found that somebody in the company was calling up and canceling
meetings that we had scheduled with outside parties. somewhat related
http://www.garlic.com/~lynn/2006u.html#55 What's a mainframe

with such concerted opposition mounting inside the company, there
wasn't a whole lot left we could do to continue with the HSDT project.

Pedantry (was RE: Shane's antipodes)

Zani, Alaimo, Bruno wrote:
As for a citizen of the UK, he or she is normally referred to as
"Inglese" or "Britannico", interchangeably. For more accuracy, one can
also use "Scozzese" (from Scotland, "Gallese' (from Wales),
"Nord-Irlandese', (from Nord-Ireland), "Londinese" (from London), etc...

about a year ago, we visited Edinburgh.

Shortly before the visit, we had bought a BBC DVD on the history of
Great Britain. The BBC show supposedly wasn't very popular in England
because it supposedly told much of the history as it actually was
... rather than how it was rewritten to make England look better. they
listed some number of the slaughters that England inflicted on the
scots ... wiping out whole areas (similar stuff in Ireland).

the Edinburgh castle military museums seem to have adopted the
cleaned-up English history.

the BBC show had the English wiping out whole areas, taking all the
land and leaving most of the remaining people w/o property ... about
the only occupations left for males were the various branches of the
military or emigration. the military museum didn't mention any of that
... just that all the young men signed up for the military because
they were so patriotic and brave.

the Edinburgh castle military museum listed the men killed at
Gallipoli but not much else. the BBC show made much of how the English
commanders were so incompetent and managed to kill huge numbers of
irish, scots, australians, etc. there was some mention that possibly
one reason Ireland tried to stay neutral in ww2 was the slaughters
perpetrated on the Irish by British commanders in ww1 at places like
Gallipoli.

after we got back, I was watching an old (ww1 flavor) Black Adder
show ... one of (Mr. Bean's) lines was ... when we see a man in a
skirt, we run him through and nick his land.

jmfbahciv writes:
And didn't IBM develop a philosophy of nailing down physical
addresses before anything can get started? This makes sense
if your sysetm is a data processing production system. You do
not want to start a job that needed a resource which doesn't exist
before the job is started.

I'm not saying this is "wrong". I'm saying that it's a different
approach that has different side effects than waiting until
the second the resource is needed to provide it.

My background is operating systems that had the "just in time"
philosophy about resources; this deeply affects when relocation
happens to code.

os/360 linker/loader resolved (relocatable address constant)
addresses at the time the application was loaded (originally "real"
addresses, but later on, applications were executed in a virtual
address space rather than real).

my complaint with the os/360 convention wasn't particularly the
resolving ... it was that the convention had the relocatable address
constants distributed thruout the executable image. that caused
problems when i did the paged mapped filesystem for cms in the early
70s. it wasn't possible to just page map the executable image and then
start execution ... the executable image had to be page mapped and
then the linker/loader had to run thru (essentially) random locations
in the executable image ... swizzling the relocatable address
constants. this requires that some (nearly random) number of pages be
prefetched by the linker/loader and modified (before application
execution can begin). misc. past posts about doing page mapped
filesystem support for cms
http://www.garlic.com/~lynn/submain.html#mmap

furthermore, it made page mapping the same executable image into
different virtual address spaces at different virtual addresses
impossible. misc. past posts about trying to support page mapping the
same executable image into different virtual address spaces at
potentially different virtual addresses
http://www.garlic.com/~lynn/submain.html#adcon

another aspect of os/360 was that physical disk storage tended to be
pre-allocated before the application started execution. on the other
hand, from the start, the cms filesystem would dynamically allocate
each physical disk record as it was required. the os/360 filesystem
tended to get very good file locality ... while the cms mechanism
could result in nearly random locations for a file's physical records.
when i did the cms filesystem enhancement for page mapping, i also
added some semantics for supporting a degree of contiguous
allocation.
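
a toy version of the locality point (not the actual cms algorithm): a
per-record allocator that just grabs any free block scatters the file
across the disk; preferring the block right after the file's last
block approximates contiguous allocation.

    # per-record allocation with a contiguity preference

    def allocate(free_blocks, last_block):
        if last_block is not None and last_block + 1 in free_blocks:
            blk = last_block + 1        # extend the file contiguously
        else:
            blk = min(free_blocks)      # fall back to any free block
        free_blocks.remove(blk)
        return blk

    free = set(range(100))
    last = None
    for _ in range(5):
        last = allocate(free, last)     # yields blocks 0..4, contiguous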

for lots of other drift .... old email discussing r/o protection of
shared images ... in the following "BNR" refers to Bell Northern
Research

This is discussing "new" method for "protecting" shared segments
introduced with release 3 of vm370. Originally, CMS had been
reorganized to take advantage of the 370 "segment protect" feature.
when that feature was dropped from 370 (as part of helping 370/165
meet schedule for retrofitting virtual memory), vm370 had to revert to
the key-protect games used by cp67 for shared page protection.

The vm370 release 3 changes eliminated protection. Instead, there was
a page scan whenever there was a task switch ... to check whether the
previous task had modified any shared pages. If a shared page was
found to be modified, it was discarded (and any subsequent reference
would cause a page fault and retrieve an unmodified copy from disk).

with multiprocessor support, they then had to add a unique copy
of shared pages for every running processor.

the email also mentions a performance enhancement to not bother with
protection at all ... allowing any application in one address space to
corrupt (shared) executable images potentially in use by a large
number of other applications.
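
paraphrasing the release 3 scheme as code (illustrative only, not the
actual DMKVMA logic):

    # on every task switch, scan the shared pages the outgoing task
    # could have touched; any page found modified is thrown away so
    # the next reference page-faults and reloads a clean copy from disk

    class Page:
        def __init__(self):
            self.changed = False        # stand-in for the hardware change bit
            self.resident = True

        def discard(self):
            self.resident = False       # next touch refaults a clean copy

    def task_switch_scan(shared_pages):
        # this scan runs on *every* switch -- roughly the overhead the
        # BNR measurement quoted below is complaining about
        for page in shared_pages:
            if page.changed:
                page.discard()
                page.changed = False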

FYI; BNR has done some testing on what the release 3 changed page
checking is costing them. A 4-5 segment program moved into a
discontiguous shared system was requiring about 10% more CPU to
complete than running as a module (the changed page checking more than
offset any reduction in CPU gained by not doing as much paging and not
doing the I/O to load the module). The release 6 not-checking code
appears to be worse than useless for most users (whoever thought of it
in the 1st place). Most of the people who are in the tightest CPU bind
also have the most users (possibly 200 plus); it wouldn't take very
many incidents of accidental, unpredictable segment modification to
completely blow away all possible thru-put gains of using it. Maybe
that could be minimized by having a timer driven routine which would
run around once every 2-3 minutes checking for any changed pages
(which were not being checked for) and abend CP (SVC 0). That would
bring the system down and back up again clean with a minimum of
service disruption to the users. The other alternative is for 15-30
minutes to elapse where everybody got the feeling that something was
wrong, but nobody could quite figure out what. Hopefully a system
programmer would be along to get them out of the mess (also PTR could
even abend faster if a shared, changed page was ever selected). This
also eliminates all the nasty bugs associated with DMKVMA and
unsharing. You get a nice, clean abend that nobody has to worry about
fixing (could even change CP so that it even bypasses taking a dump).

I haven't heard of anybody yet who is planning on using the release 6
feature to save the overhead of checking for changed pages in
CMS. Maybe some installation that doesn't have very many people
dependent on CMS (Of course in that case why are they worrying about
performance?).

The first problem with taking an existing application and setting it
up for common, shared execution across multiple virtual address
spaces involved making sure that the executable image would work in
read-only, protected storage. A lot of the applications from the
period tended to have a lot of temporary working storage spread
throughout the program. Portions of the code typically had to be
rewritten to convert it for use in read-only, protected execution.
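
loosely, what that conversion looks like (python rather than 360
assembler; the names are made up):

    # working storage inside the (would-be shared) module breaks when
    # the module is made read-only; the fix is moving all writable
    # state out into per-user storage passed in explicitly

    SCRATCH = bytearray(256)        # storage inside the module: unshareable

    def format_record_unsafe(rec):
        SCRATCH[:len(rec)] = rec    # writes into the "executable image"
        return bytes(SCRATCH[:len(rec)]).upper()

    def format_record_shared(rec, scratch):
        scratch[:len(rec)] = rec    # caller-owned, per-address-space storage
        return bytes(scratch[:len(rec)]).upper()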

I had originally done the CMS paged mapped filesystem support and the
shared execution support in the early 70s. A very small subset of that
support was included in release 3 under the label "DCSS" (or
discontiguous shared segments).

converting also meant modifying existing application code that made
use of internal working storage. recent post that has quite a bit of
RED editor discussion
http://www.garlic.com/~lynn/2006u.html#26 Assembler question

included an email from 3nov78 that makes a passing reference to
modifying the RED editor for execution in shared, protected storage.

this refers to RED having the necessary modifications between 3nov78
and 23may79. the "re-genmoding" (in the following email) refers to the
CMS command, with the modifications, that supports automatically
invoking shared execution from the paged mapped filesystem.

just by loading and re-genmoding them. RED and NED have been regen'ed
with just segment 2 shared (RED has another 5-6 pages and NED has
another 2-3 pages). APL was gen'ed with segments 2, 3, and 4 shared
(it has another 1 or 2 pages). DSM was gen'ed with segments 2 and 3
shared (it has another 14-15 pages, it would be worthwhile to obtain
the original DSM text decks and dummy the ending module location up to
the next segment boundary so that segment 4 could also be shared).
EDGAR is just slightly under 16 pages. It will also be necessary to
obtain the original text for it so that its ending address is rounded
up to a segment boundary so that it will have a shared segment. That
takes care of the immediate modules that I know about that can be
shared.

--

also re: APL; the APL we are running is 2.1 from PID. Talking to the
science centers it was suggested that we obtain LA's APL for our
production system. Among its other enhancements, their comment was
that at least with LA's APL we would receive support as compared to
the PID version (I assume they mean to imply the PID version isn't
answering and/or correcting any bug reports).