jcmorris@mitre.org (Joe Morris) writes:
>A minor question that's been on my mind for several years: was the
>official name of the system "Hands-On NEtwork"? I've seen this given
>as the origin of the acronym HONE but never from an official source.

Hands-On Networking Environment
^ ^ ^ ^

>Another nagging question: why APL, both in the sense of "why at all"
>and (especially) why "total dependency?" Was there a directive that
>only APL was to be used, or was it just that at the time APL, despite
>its resource consumption, was still seen as the best available tool?

the padded cell environment was first built in APL and lots of the
delivery tools were written in APL. Way back when ... in the origin
(some memory fade) ... cp/67 services were put up in some of the
regions for field people support ... including being able to run some
370 testing (aka they were provided with a special modified version of
cp/67 that emulated virtual 370 rather than virtual 360, "cp67h"). This
was part of the effort that had a modified cp/67 (cp67i) that would
run on a real 370 hardware with relocate support a year before the
hardware was built:
http://www.garlic.com/~lynn/2002h.html#50 crossreferenced program code listings

Some applications were written in APL and I believe the group doing
some of the applications had chosen APL and they just grew. It
eventually became a whole "padded-cell" environment (sequoia). This
then drove a lot of the applications ... including some "foreign"
applications that were recoded in fortran because of extreme
performance issues. I believe that as the body of APL applications
grew, it just built up a sort of legacy momentum. One of the extensive
uses of APL at the time was the various what-if things that you see
done in spreadsheets today.
>And some customer shops were given access to the configurator programs,
>although (obviously) not to the ordering tools. I recall using them,
>for example, to develop a viable configuration for our 3725, a task
>made more difficult than it should have been because we got no
>documentation at all except for the online help -- and this over a
>2400 bps (+/-) dialup circuit.

"Russell P. Holsclaw" writes:
After the outbreak of the HIV virus, these tools went by another name,
but I don't recall what it was. ISTR that the impetus for renaming
them came about as the result of some embarrassment in which the
authors of many of these tools were brought together for some sort of
awards banquet at a hotel in San Francisco. The upshot of this was
that many programmers were turned loose on the streets of San
Francisco wearing baseball caps that said "AIDS" on the front. This
created a minor disturbance.

one of my favorite stories of renaming was the santa teresa
lab. Normally labs will get the name of the local post office/town
(aka like Meyers Corner). I was on vacation in Wash DC the week before
the santa teresa lab was to be dedicated and there was a group from
san francisco demonstrating on the steps of the Capitol; these were
some professional ladies that had a group name the same as the closest
post office to the santa teresa lab. In a very short time, a new name
was selected.

jcmorris@mitre.org (Joe Morris) writes:
Another nagging question: why APL, both in the sense of "why at all"
and (especially) why "total dependency?" Was there a directive that
only APL was to be used, or was it just that at the time APL, despite
its resource consumption, was still seen as the best available tool?

another even earlier kickstart for APL .... was that some group was
supporting the business people in armonk and dp hdqtrs (1133). They
were writing apl models that were allowing these people to do various
what-if & other kinds of business analysis.

this was when cms\apl was first created (port from phili's apl\360 to
cms environment and making available up to 16mbyte workspaces instead
of just 16k or 32k workspaces). there was an enhanced sense of
security when all the most valuable corporate data was loaded onto the
cambridge system so these guys could run their financial & business
analysis applications. eventually these guys got their own cp/67
systems down in new york. random past ref:
http://www.garlic.com/~lynn/2002h.html#34 Computers in Science Fiction

ok, didn't find a HONE reference card (yet) ... but digging thru some
old stuff did find an Aids Reference card.

also, somebody pointed out that my prior use of Myers Corner as an
example was a poor choice ... it was the intersection, not a post
office.

Note in the attached, consolidation of the (US) regional HONE systems
to California didn't occur until 1977. At this time in 1973 there were
HONE regional centers in Los Angeles, Chicago, Washington DC, and New York.

This is also before Sequoia and the automatic "padded-cell" (aka
people logging in found themselves automatically placed in the
Sequoia padded-cell; they didn't have to "IPL CMS", start APL, etc).

well the previous post got subject to all sorts of delays and other
things ... apparently because of the four letters that appeared
frequently in the post.

i've just about run out of places to look for the HONE reference card.

Did find

a reference card for VS/APL applications that customers could
order dated 1979.

a 3270 fullscreen editor user's guide printed 5/18/77 at HONE1 on the
backside of greenbar paper. There were a number of 3270 editors
developed internally around the company for CMS during the '70s
... this particular one was RED (for raleigh editor).

a printout done at HONE1 on 4/15/77 of the internal network ... also
on the backside of greenbar paper. This is in two parts, 1)
"graphical", little boxes with the node name and lines connecting to
other little boxes and 2) actual node list giving nodeid, "index" (in
the graphical ... the majority of nodes didn't show in the graphical),
location, (machine) model, type of system, and operator/contact phone
number. Slightly different format from the one used later:
http://www.garlic.com/~lynn/99.html#112

The (US) HONE consolidation process had just started at this point and
would grow extremely rapidly over the next two years.

There were the DEMO centers in the list ... which had slipped my mind.
In addition to HONE operation for things like configurators and ****,
there were also education center machines and "DEMO" machines in each
of the regions (for use by people in the respective regional and
branch offices). HONE had started trying to shutdown all general use
of CMS and just restrict things to the padded-cell environment. The
"DEMO" machines provided more of a native CMS time-sharing capability
to the people in the branch offices.
DEMO1 Boston
DEMO2 New York
DEMO3 New York
DEMO4 Philadelphia
DEMO5 Washington, DC
DEMO6 Atlanta
DEMO7 Cincinnati
DEMO8 Detroit
DEMO9 Chicago
DEMO10 Minneapolis
DEMO11 St. Louis
DEMO12 Dallas
DEMO13 San Francisco
DEMO14 Los Angeles

Somewhat aside, Peterlee (science center) was also on the network
list. One of the people at Peterlee had written an email client called
VMSG. At about internal development release 0.6, somebody in the PROFS
group had co-opted the (limited function) source to use as the core of
PROFS implementation. After that, the VMSG source distribution was
limited to John, one other person and me.

Mike Hammock writes:
With all the talk about HONE and "market aids", I had to get my
$.02 (USD) worth in here... About the time of the consolidation
Lynn mentions below, the IBM division I was in at the time, GSD, was
positioning itself as an almost totally independent business unit in case
of a government or business induced splitting apart of IBM. Part of this
included us 'picking up' the HONE system and running a copy of it,
with specific GSD configurators, on our own systems in Atlanta.
We started off with a 370/145 running VM/370 Rel 3. (We actually
started with Rel 2, but went to 3 before going 'production'.)
I'm pushing my memory here, but I believe that the 145 had some very
effective microcode assist for VM. However, it turns out that the use of
this microcode was mutually exclusive with some of the performance
assists in Sequoia, so we could choose one or the other until some
changes were made. We had to limit the concurrent logged on
users to 10 for a while... But considering that was a 512K
system doing all that APL work, even that is pretty amazing!!
Oh yes, the name of the system was MAS: "Market Aids System".

cambridge science center had done the apl\360 to cms\apl port
... including the rewrite of garbage collection for (large) virtual
memory (aka running to the end of a 32kbyte workspace in real storage
wasn't bad ... but always running thru a 16mbyte (virtual) workspace
had all sorts of downside effects on paging). cambridge also put in
functions for making system calls ... which got lots of people upset
as violating apl purity.

palo alto science center then did apl\cms and the 145 apl microcode
(145 w/apl m'code had apl thruput comparable to 168). The system call
violation was removed and replaced with the "shared variable"
construct (that was used to implement access to system functions).
Since Sequoia was becoming a system unto itself .... Sequoia was
getting a lot of performance optimization.

My memory is hazy here, i was at some of the sequoia optimization
meetings ... and there were things like adding custom assembler
sequences to the APL supervisor specific for enhancing sequoia thruput
... which would have perturbed the ability to use the apl m'code.
HONE was running clusters of 168s upgraded to clusters of 168SMP and
tweaking wasn't done with thought of 145 apl m'code compatibility
(although HONE looked at whether there was any way of leveraging farms
of 145s for any of their workload).

I'm pretty sure the Sequoia/configurator group stayed in LA when the
HONE consolidation took place. In any case there were meetings between
the Sequoia group and the guy at PASC that was responsible for 145 apl
m'code (he later was also responsible for much of the high performance
fortran hx work).

There was a separate significant m'code assist for 138/148 called ECPS
that dropped a lot of the cp kernel (pathlength) into m'code ... but
that would have had little effect on CPU utilization that was nearly
all APL (and a majority of that was the apl Sequoia application). ecps
reference:
http://www.garlic.com/~lynn/94.html#21 ECPS VM microcode aasist

in the above, there were two methods used for identifying kernel
hotspots, the kernel call trace and special microcode 145 load that
did instruction address sampling. The person that was responsible for
the apl m'code also implemented the m'code address sampler.
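A loose, modern software analogue of that instruction-address sampling (my sketch, nothing to do with the actual 145 m'code sampler): a profiling timer periodically interrupts the program and records where execution was, and the counts identify the hotspots.

```python
# Sketch of address/line sampling for hotspot identification.
# SIGPROF fires on consumed CPU time; the handler records which
# function/line was executing when the sample was taken.
import collections
import signal

samples = collections.Counter()

def sampler(signum, frame):
    # record the (function, line) the program was executing
    samples[(frame.f_code.co_name, frame.f_lineno)] += 1

signal.signal(signal.SIGPROF, sampler)
signal.setitimer(signal.ITIMER_PROF, 0.001, 0.001)  # 1ms sampling

def hotspot():
    total = 0
    for i in range(2_000_000):   # deliberately CPU-bound loop
        total += i * i
    return total

result = hotspot()
signal.setitimer(signal.ITIMER_PROF, 0, 0)  # stop sampling

# the most frequently sampled locations are the hotspots
top = samples.most_common(3)
```

The kernel-call-trace method mentioned above is the complementary approach: instrument the entry points and count calls, rather than statistically sampling addresses.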

jcmorris@mitre.org (Joe Morris) writes:
<chuckle> I presume that there was a nearby post office named Coyote?

95013

old 101, monterey highway ... just north of the bailey T (i.e. bailey
runs west from old 101 crosses santa teresa, past STL lab and up into
the hills to the calero reservoir).
http://claraweb.co.santa-clara.ca.us/parks/prkpages/calero.htm

"Clean" CISC (was Re: McKinley Cometh...)

"Stephen Fuld" writes:
All true, but I want to add another determinant. Back then, the speed of
main memory was very slow and there weren't any caches. So it made sense to
have a small amount of very fast memory (usually ROM, but occasionally
writable) in the processor that could hold "instructions" that could be
fetched in one cycle. This was called microcode. Being able to fetch one
instruction from main memory that caused the execution of N actual processor
operations (micro-operations if you will) was a performance win. Once
main memory got faster and the technology of caches became available, the
advantage of that dedicated memory disappeared and it made sense to
"execute" the micro-operations directly - thus RISC. There were several
technology factors that led to the "risc revolution".

actually i've claimed that probably the first "big" project applying
RISC had the objective of replacing all the (smaller) corporate
micro-engines ("every S/360 and S/370 compatible processor except the
S/370 Model 75, is microcoded") with 801s (and had fairly good-size
staff-up). after that got killed, you started seeing 801 engineers at
other companies. Skill-base might also be considered a contributing
factor.

associated with the activities for the low/mid-range 370s there were
also projects looking at JIT-like activities for 370 code. this was 20+
years ago. I got contacted, in part because I had done a PLI program
that analyzed and attempted translation/restructure of 360/370 code
nearly 10 years earlier.

"Clean" CISC (was Re: McKinley Cometh...)

"Stephen Fuld" writes:
It has been far too long, but can't you use a Translate and Test instruction
to do the equivalent (copy until a zero byte is found), at least for up to
256 bytes per instruction?

"Clean" CISC (was Re: McKinley Cometh...)

"Stephen Fuld" writes:
main memory got faster and the technology of caches became available, the
advantage of that dedicated memory disappeared and it made sense to
"execute" the micro-operations directly - thus RISC. There were several
technology factors that led to the "risc revolution".

and right up there with (hardware) skill base was compiler & operating
system technology. one of the reasons for (nearly) every 360/370 being
m'code was to have broad range of hardware implementations and
capabilities while preserving programming compatibility.

riscs moved into a market segment that had a large software portable
technology base (unix & c) that was working hard on being hardware
architecture agnostic.

The merchant supplies N, R, and E( R xor M, E(N,K) ) to a black box
provided by the credit card company which knows K.

The user's program and the merchant's black box both calculate some
function f( N, R, E(N,K), M ) which is used as the secret key for
communications between the user and the merchant. The user's program does
this by being given the user's password, which contains E(N,K); the black
box does not require E(N,K) because it contains K; thus, disassembling the
program given to allow people to use their credit cards on the Internet
doesn't make the system insecure (only cracking open one of the black
boxes does).
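The quoted scheme can be sketched end to end. This is a toy model only: E() is stood in by an XOR-against-a-hash keystream (so applying it twice with the same key inverts it), f() by HMAC-SHA256, and the 32-byte sizes and the role of M are my assumptions, not the poster's actual construction.

```python
# Toy model of the quoted symmetric-key scheme; E() and f() are
# illustrative stand-ins, not real cipher constructions.
import hashlib
import hmac
import os

def E(data: bytes, key: bytes) -> bytes:
    # toy invertible "encryption": XOR against a hash-derived keystream;
    # E(E(x, k), k) == x
    stream = hashlib.sha256(key).digest()
    return bytes(a ^ b for a, b in zip(data, stream))

def xor(a: bytes, b: bytes) -> bytes:
    return bytes(x ^ y for x, y in zip(a, b))

def f(N, R, ENK, M):
    # session key computed independently by both sides
    return hmac.new(ENK, N + R + M, hashlib.sha256).digest()

K = os.urandom(32)    # secret known only inside the black box
N = os.urandom(32)    # account identifier
ENK = E(N, K)         # E(N,K), embedded in the user's password
R = os.urandom(32)    # per-transaction random value
M = os.urandom(32)    # merchant-side value (assumption)

# user side: knows N, R, M, and E(N,K) from the password
user_key = f(N, R, ENK, M)

# merchant hands the black box N, R, and E(R xor M, E(N,K))
blob = E(xor(R, M), ENK)

# black box side: recomputes E(N,K) from K, recovers M, derives the key
box_ENK = E(N, K)
box_M = xor(R, E(blob, box_ENK))
box_key = f(N, R, box_ENK, box_M)
```

The point of the design, as quoted, is that the user-side program never contains K, so disassembling it only exposes that one user's E(N,K).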

there were a couple requirements laid down in the x9a10 standards
committee for x9.59 ... 1) was preserve the integrity of the
financial infrastructure for all electronic retail payment
transactions (agnostic with respect to kind of payment and environment
of payment) with only authentication and 2) perform the payment in a
single round-trip.

One of the issues for end-to-end authentication is that the
information hiding techniques tend to be applicable only for very
transient periods during transmission, while the data continues to
remain vulnerable the rest of the time (lots of stories about fraud as
the result of leakage of merchant transaction log files)
http://www.garlic.com/~lynn/subintegrity.html#fraud

basically x9.59 follows somewhat an existing iso8583 message
.... except it adds a digital signature and a couple other fields
... and sends it off. The purchase could be traditional web browsing
and sending off a request with an x9.59 payment message. The purchase
could also be from a cdrom, sending off an order in an email with an
x9.59 payment message attached. The purchase could also be at POS with
an x9.59 message signed with a smartcard. There is no requirement for
any real-time protocol chatter.
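The shape of such a signed payment message can be sketched roughly. The field names are hypothetical and HMAC stands in for the standard's public-key digital signature (stdlib Python has no ECDSA); this is not the x9.59 wire format, just the pattern of "canonicalize, sign end to end, verify at the authorizing institution".

```python
# Rough sketch of an end-to-end signed payment message; field names
# and the HMAC stand-in signature are illustrative assumptions.
import hashlib
import hmac
import json

signing_key = b"consumer-signing-key-stand-in"   # hypothetical

payment = {
    "account": "4000000000000001",   # hypothetical account number
    "amount": "19.95",
    "currency": "USD",
    "merchant": "example-merchant",
}

# canonicalize, then sign; intermediate nodes forward the message
# intact rather than stripping protection at "boundary nodes"
canonical = json.dumps(payment, sort_keys=True).encode()
payment["signature"] = hmac.new(
    signing_key, canonical, hashlib.sha256).hexdigest()

def verify(msg, key):
    # the authorizing institution re-derives and checks the signature
    body = {k: v for k, v in msg.items() if k != "signature"}
    expect = hmac.new(key, json.dumps(body, sort_keys=True).encode(),
                      hashlib.sha256).hexdigest()
    return hmac.compare_digest(expect, msg["signature"])
```

Because verification travels with the message, the same signed blob works whether it arrives via a web form, an email order, or a POS terminal, which is the "no real-time protocol chatter" property.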

x9.59 defines a business rule that account numbers used in x9.59
authenticated transactions should not be valid in non-authenticated
transactions. effectively in the non-authenticated payment
transactions, the account number (& other data) exists in a large
number of places and any leakage of that information can result in
fraudulent transactions .... aka the information needs to be treated
as shared-secret ... since just knowing the information can enable
fraudulent transactions. Furthermore, any use of encryption for
information hiding tends to be while data is in flight ... not
typically while at rest (creating a large number of vulnerability
opportunities ... both for outside attacks as well as for insiders).

The merchant is also taken out of the loop of having to protect the
consumer's account number and related information
http://www.garlic.com/~lynn/2001h.html#61 Net banking, is it safe???
(or security proportional to risk)

Rather than transient hiding of the vulnerable information while in
transit, x9.59 defines end-to-end authenticated transaction where
the party (financial institution) responsible for authorizing and
executing the transaction .... is also performing the authentication
(it also eliminates a lot of widely distributed data as fraud
vulnerabilities).

Part of the x9.59 issue was that the current infrastructure using SSL
for information hiding while data was in motion on the internet
addressed only a small piece of the business vulnerabilities (and
fraud opportunities) involved.

"Stephen Fuld" writes:
Not quite. Firewire and USB are at a different level of the protocol than
SCSI or ATA. While both of the latter do define a physical interface, SCSI
has morphed into more of a high level interface that can run on multiple
underlying interfaces. (Originally SCSI was both, but since SCSI 3, the
upper layers are defined independently from the lower ones, and parallel
cables is just one physical medium supported, along with various serial
schemes such as Fibre Channel.) So when you attach a disk to USB, you are
running SCSI over USB. For example, there is no definition in USB that
specifies how to send a disk block address to a disk. That is provided at
the SCSI level. ATA has historically provided both the physical and the
upper layers, but that is changing more toward the newer SCSI model with the
two separated.

One certainly could define a way to run ATA over USB or firewire, but
without some notion of disk commands, just USB or Firewire by themselves are
insufficient.

9333 from the late '80s was effectively SCSI commands over simplex
serial copper (similar to FCS but copper instead of fiber-optics).
That morphed into SSA standard. I would have preferred that 9333 had
morphed into serial copper with some FCS compatibility ... aka being
able to plug SSA cables into FCS switches (operating at fractional FCS
speeds).
http://www.garlic.com/~lynn/95.html#13 SSA

Ever inflicted revenge on hardware ?

"Rupert Pigott" <dark.try-eating-this.b00ng@btinternet.com> writes:
Actually I did give in to the urge to inflict a gruesome
revenge on a piece of hardware that let me down... My expensive
monitor failed just out of warranty (good work whoever designed
it). The sad thing is, one I bought 7 years earlier for 1/4 the
price is STILL working and has had far more use anyway.

the 1052-7 (operator's) console on the 360/67 in cambridge would
periodically get a fist sized indention in the keyboard. they kept a
spare around so the 1052-7 could be easily swapped. I was only
responsible for one of the indentions.

one of the frustrating failure modes was that the end of the last sheet
would feed past the paper sense finger ... but not far enuf so that it
would completely feed out and you would realize it was out of
paper. the only indication was that it just stopped
working. hitting it hard would sometimes joggle the paper enuf that it
would slip further and then you would realize that it was out of
paper.

ctill@nc.rr.com (Chuck Till) writes:
Sounds like PR fluff in retrospect. It didn't take much to
double CDC's data service business, and I wonder how many
SBC customers in 1969 were still using CDC/SBC a few years
later.

i don't know the circumstances ... but ibm real estate retained the
SBC building out in burlington mall ... and the vm/370 group moved out
there when they outgrew the space on 3rd floor 545 tech. sq. the cp/67
group had split off from CSC and sometime in the time-frame of
starting the morph of cp/67 to vm/370 they started rapid growth
... including essentially absorbing the (ibm) boston programming
center on the 3rd floor (bpc had maybe 3/4ths of the 3rd floor, csc
was about 2/3rds of the 4th floor with the csc machine room taking
about 1/2 the 2nd floor).

the boston programming center was responsible for cps (conversational
programming system ... interpreted pl/i ran under os/360 ... and there
was special microcode available for 360/50 that made it run faster).
Jean Sammet, Nat Rochester, couple others had their offices in the
boston programming center (so the vm/370 group sort of absorbed them
also). When vm/370 group moved out to sbc bldg. in burlington mall
they stayed in 545 tech. sq.

Symmetric-Key Credit Card Protocol on Web Site

jsavard@ecn.ab.ca () writes:
Certainly, one could print a second account number on each credit card for
use with electronic transactions, or just make the password thirty
characters long. (But if one did _that_, then the password by itself,
mailed in one envelope, contains all the information needed for an
electronic transaction... unless, of course, the account number on the
credit card has to get sent after encryption is initiated.)

sorry, i somewhat got carried away ... from the original e-commerce
stuff ... the business fraud problem IS the treatment of the account
number as a shared-secret even tho it has to be available in
clear-text in a large number of places (there are large number of
places where the account number has to be used in the clear and as a
result it can leak out ... more like a giant sieve with encryption
only covering up a few of the openings).

as an aside there is already support for mapping multiple different
numbers to the same account ... even with multiple different tokens
(magstripe or otherwise) ... aka spouse cards, etc. also from
electronic perspective ... the typical magstripe carries room for five
different account numbers (when you put a debit/atm card in an atm
machine ... the selection of which account you want to use can be
driven off the account number data on the magstripe).

x9.59 was to enable transition from a shared-secret paradigm to a
public key end-to-end authenticated transaction paradigm (possibly
deployed first in e-commerce with alternate account number)
.... eliminating knowing the account number as a fraud vulnerability
(and the necessity for information hiding).

In effect the current payment cards are two different devices ... the
physical embossed plastic that can be used in paper transactions and
the magnetic stripe for electronic transactions; these two different
devices just happened to be carried on the same card. It is possible
to package a chip for x9.59 public key operations on the same or
different plastic. If packaged in the same plastic ... you could treat
it as three different logical devices (in the same physical
housing). There is nothing that would mandate that the chip's account
number be the same as the magstripe's account number or the plastic
embossed account number. The "correct" account number could then be
chosen based on what transaction paradigm is used ... and then apply
appropriate risk & fraud rules.

It wouldn't be necessary for somebody to write the alternate account
number on the plastic (assuming single physical housing) ... the chip
just supplies the correct value (in much the same way the magstripe
can supply up to five different account numbers w/o any of them having
to be written on the card).

ITF on IBM 360

Jonathan Griffitts writes:
Back in the early 1970s I used an obscure IBM product that I've never
heard of since. It was called "ITF" for "Interactive Terminal
Facility", ran on IBM 360. It provided time-sharing access using
interpreted BASIC and a PL/I subset called IPLI.

I don't remember the name ITF ... although it may be lost in memory
someplace. The ibm boston programming center (3rd floor 545 tech sq)
had done something called CPS (conversational programming system) that
matches that description. There was also a 360/50 microcode
accelerator for CPS. It is possible that CPS was renamed(?) ITF at
some point (or CPS was also called ITF)?

gah@ugcs.caltech.edu (glen herrmannsfeldt) writes:
I don't know how much is in assembler. I would hope that PL/S would
be high enough level that such programs could be ported without
completely rewriting them. Probably it would be hard, but then
that might have been said of MacOS and VMS before they did it.

there were jokes (or at least there used to be) that pl/s, pl/x, etc
... were hardly more than structured assembler.

amdahl gave a talk at mit in the early '70s ('72?) about the business
plan for clone mainframes .... saying that there were something like
$100b worth of assembler software or executables that no longer had
source ... that wouldn't get replaced at least before the end of the
century (so there would be plenty of market for his mainframes,
regardless of what ibm might do). also during the talk there were some
very pointed questions about the amount of foreign investment he had
taken.

i would expect that the amount/value of such software has grown many
times since then ... with possibly only small effect by all the y2k
work.

... however this has all been looked at several times before; during
FS in general (<25 years ago) ... and during Fort Knox specifically
with respect to 801 (20 or so years ago) ... and as already noted
... been accomplished for as/400.

basic smart card PKI development questions

john.veldhuis writes:
The only thing that browsers directly support, using CSP or PKCS#11, is
the Client Authentication part of an SSL connection.
So you'll need some kind of plug-in. This might be a transaction client
plug-in, which could digitally sign whatever has been posted.

some example code for ec/dsa (fips186-2) chipcard is on
www.sourceforge.net (do search on ecdsa).

jeffj@panix.com (Jeff Jonas) writes:
1) the IBM system 1130 had a CRT option but it required
most of the computer to run it. The system 1130 was pre-IC,
it used little metal cans with several transistors and components inside.

or 2250m4. somebody at cambridge had done a port of spacewars to the
2250m4 by the early '70s. also the initial/original/genesis code for
the internal network was some stuff between the 1130 (aka 2250m4) and
the 360/67.

2250m1 had its own channel controller directly attached to 360
channel. I got to use one at the university. lincoln labs had done a
fortran graphics library for cms that drove the 2250m1. I modified the
cms editor to use the lincoln labs 2250 library for a simple
full-screen editor (circa late '68).

boeing huntsville had large 360/67 duplex with some number of 2250m1
running version of mvt/r13 (aka running 360/67 in '65 mode). An issue
with mvt and single flat, real, linear address space with multiple
long running programs was storage fragmentation (i.e. space for single
program needed to be contiguous ... a single 2250 program tended to
run for long periods of time). they modified mvt/r13 to use the
hardware relocation of the 360/67 to address storage fragmentation
(i.e. no paging or page faults were supported ... but they used the
virtual storage mechanism to provide the appearance of contiguous
program storage).

About the time BCS (boeing computing services/systems) was formed in
late '68/early '69 ... the boeing huntsville machine was shipped to
seattle.

Symmetric-Key Credit Card Protocol on Web Site

jsavard@ecn.ab.ca () writes:
Ah. I misunderstood; and I was disappointed when your URLs led to news
archives rather than a complete description of the protocols in question,
having misunderstood your posts.

contains pointer to the ANSI bookstore for getting copy of the x9.59
standard that describes the protocol(?). there is the small issue of
standards being copyrighted and rules about ordering them from the
standards organization.

it also contains a pointer to a detailed description of mapping x9.59
to iso8583 (8583 is the internal standard for payment network
messages).

in effect, x9.59 basically describes a signed payment message. The
mapping of x9.59 to iso8583 is from the standpoint of providing
end-to-end message integrity and authentication.

a large percentage of other payment messages that have used encryption
or other forms of digital signature ... have been implemented with the
encryption and digital signatures being stripped off at "boundary
nodes". this results in a very simple violation of basic security
principles: no end-to-end security, integrity and/or authentication.

Ian Wade writes:
Agreed, but everything you have said relates to the originator. My
concern is to do with the recipient being able to deny receiving a
message. "Proof of existence of a message" will manifest itself when a
message is sent (through its accompanying signature), but has nothing
to do with acknowledging receipt of the message.

MVS on Power (was Re: McKinley Cometh...)

J Ahlstrom writes:
These are MILLSTONE code and MILLSTONE applications
pure and simple.

while a lot might be MILLSTONE code ... a lot of the applications are
like payroll, funds transfers, financial networks, check processing,
air traffic control systems, reservations, etc ... aka the nitty
gritty of business operations around the world.

cecchi@signa.rchland.ibm.com (Del Cecchi) writes:
Another critical area of John's work is "logic simulation." John invented a
generalized special-purpose logic simulator, which runs many orders of
magnitude faster than conventional simulations. In the 1980's a special
purpose simulation machine known as the Yorktown Simulation Engine (built
to John's design) was used to simulate an entire computer at the logic gate
level. The logic gate simulation produced answers faster than older
computers executing programs in the machine's basic instruction set. Many
of these simulation engines and their descendants are in use today. They
enable the designer to verify and fix logical design before committing it to
silicon. This substantially shortens VLSI (very large scale integration)
development times, and such special simulation engines are widely used
within the industry.

I believe the LSM (los gatos simulation/state machine ... later the
logic simulation/state machine) predated YSE (although john may have
had lots of influence on the LSM). I believe the LSM was unique in
that it included time support and could handle both asynchronous chips
and digital/analog chips (aka used for things like disk read/write
heads).

Unisys A11 worth keeping?

"Rupert Pigott" <dark.try-eating-this.b00ng@btinternet.com> writes:
a "Do this" it's a "Polite request", which the VM system can
ignore if it so chooses (a la Lynn's do nothing tuning
parameters). There are definately some apps which do need this
level of control, and they are a nightmare under systems that
don't give you that level of control.

one was real-time tuning parameters .... the code could monitor and
tune faster than a human could reduce data for the day and set some
parameters for the next day .... even tho a day's average data didn't
react to momentary instantaneous changes.

i also have done a paged-mapped file system ... allowing the
application to give hints ... and then the actual system looked at the
hints vis-a-vis real time information and attempted to reconcile the
hints with real time information. this is different than a system
programmer trying to represent tuning parameters as real time
information ... when the system could do real, real time information
and tuning.

there are a lot of os/360 apps that did a really good job with the
asynchronous & direct mapped I/O facilities of maximizing thruput with
lots of buffering and overlapped operation. this was a "real storage"
and "real i/o" paradigms .... mapping that capability into a virtual
memory & page-mapped filesystem offered some interesting challenges.

one of the original tss/360 operations was to try and implement
one-level store with no hints .... just map the complete file to
virtual memory and fetch a page as the page faults occurred. This is
one of the things that gave tss/360 such poor thruput ... the page
mapping of both data and executables and then let the pages be fetched
synchronously a single (4k) page fault at a time.

At the simplest level ... remapping the os/360 compiler executable
overlays into paged-mapped form ... you could get hints on approx. what
was going to be needed when ... and when it was no longer needed;
hints about intelligently throwing away pages no longer needed were
almost as useful as hints about multi-page and/or overlapped fetch.
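the hint-giving idea above survives in modern unix systems as madvise
on memory-mapped files; a minimal sketch under that assumption (this
is an illustrative stand-in, not the paged-mapped filesystem described
above -- the madvise constants are Linux/unix specific):

```python
import mmap
import os
import tempfile

# create a small file and map it read-only, standing in for a
# page-mapped executable or data file
with tempfile.NamedTemporaryFile(delete=False) as f:
    f.write(b"x" * mmap.PAGESIZE * 4)
    path = f.name

fd = os.open(path, os.O_RDONLY)
m = mmap.mmap(fd, 0, prot=mmap.PROT_READ)

# "multi-page fetch" hint: these pages will be needed soon,
# so the kernel may read them ahead, overlapped with execution
m.madvise(mmap.MADV_WILLNEED, 0, mmap.PAGESIZE * 2)

data = m[:mmap.PAGESIZE]

# "throw away" hint: done with the first page, the kernel may
# reclaim the frame instead of waiting for LRU aging
m.madvise(mmap.MADV_DONTNEED, 0, mmap.PAGESIZE)

m.close()
os.close(fd)
os.remove(path)
```

the DONTNEED hint is the analogue of the "intelligent throwing away"
point above: telling the pager what is finished can be worth as much
as telling it what comes next.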

SABRE ... morphed into ACP, TPF, etc ... although AA/AMR still refers to
their service as SABRE.

little side-note on the 8100 ... it was the uc.5 microprocessor ... same
processor used in the 3705 and the service processor for the 3081 (among
other things). while it includes the 8100 ... it doesn't say anything
about the s/32, s/34, s/38 (as/400), etc.

I've heard some strange rumblings about the author of Bitnet LISTSERV
threatening to trademark the name "LISTSERV" (and perhaps "listserver"
and other variants) and also threatening to sue people who use those
terms "inappropriately". There also seems to be "warfare" between the
Bitnet LISTSERV camp and the Unix(tm!) listserv camp. (I hear this
from folks who monitor various Internet newsgroups.) Does anyone know
about this? It seems mighty bizarre!
Larry Chace, Cornell Information Technologies

APPENDED 10/27/93 11:03:07 BY CUN/CHACE

Append on 10/27/93 at 13:54 by David Boyes ( DBOYES@RICEVM1 ) Rice University:

It's true. L-soft (Eric Thomas' new company) is making requests that
the Unix LISTSERV-wannabees call themselves something different to
prevent confusion with the real thing. Not too unreasonable a request,
considering the extremely poor quality and reliability of the Unix
imitators. I certainly wouldn't want my product named the same as
most of them.

APPENDED 10/27/93 13:54:51 BY RIC/DBOYES

Append on 10/27/93 at 13:55 by Mark A. Stevens, ECN, 708-235-2204:

To add to the rumors (to get clarification) CREN is trying to get a
person(s) to write a full-featured listserv for Unix(tm) toward the
removal of BITNET?

At least one Unix wannabe, written by Anastasios Kotsikanos, has been
renamed; he is now calling it the Unix List Processor. One problem is
that many folks have gotten used to using variants of "LISTSERV" as
the generic term (e.g. listserve, listserver, list server). To
compound the confusion, many people use "listserv" as the generic for
"mailing list" -- ie both to refer to the software and to the
discussion groups it supports. So far no replacement generic has been
universally agreed upon. "Mailing list processors" seems good to me.

Even with the rename of Tasos' (his nickname) tool, many sites still
use "listserv" as the name of the user ID to send mail to in order to
subscribe to a mailing list or change options. And Tasos still has
people visit a /listserv directory on his FTP server to fetch his
code. So there's lots of opportunity for confusion.

Eric Thomas has announced intent to deliver a Unix version of
LISTSERV, which should go a long ways towards lessening confusion. As
David says it seems the Unix wannabes are notquitethereyets.

Charles Richmond writes:
At my college, when computer usage figures came out for the
mainframe (an IBM 370/155), the chemistry department was far
and away the largest user of computer time... I assume that
this is probably typical.

something similar for the sjr 195 ... that and some of the floating head
simulation work (modeling the air bearing effect). it also facilitated
getting additional time on the 3033 in the product test lab (bldg. 15)
for some of the chemistry work (aka the disk engineering and product
test lab processors were being used for i/o testing of new
disks ... so cpu use was almost negligible ... if you weren't
interfering with their official use ... certain arrangements could be
made).
http://www.garlic.com/~lynn/subtopic.html#disk

Unisys A11 worth keeping?

"Rupert Pigott" <dark.try-eating-this.b00ng@btinternet.com> writes:
Now you bring up the idea of hinting which pages are "finished with"
it might well be the better of the two halves of the same coin to
implement... Beats relying on LRU & pseudo LRU to get it right every
time. :)

LRU ... "least recently used" replacement .... based on the assumption
that any location recently used is likely to continue to be used
(typical of virtual memory, processor caches, etc).

under various kinds of heavy & pathological loads ... straight LRU
(and straight WSCLOCK) tends to degenerate to FIFO. I did a variation
on the 2-bit, 2-handed clock that instead of degenerating to FIFO would
degenerate to RANDOM (if there otherwise isn't useful information on
which to make a reasonable decision ... making a random ... quick ...
choice is better).
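the degenerate-to-random idea can be sketched with a one-bit clock
(a hypothetical simplification of the 2-bit, 2-handed variant described
above, not the original code):

```python
import random

def clock_replace(ref_bits, hand, rng=random):
    """One victim selection for a simple one-bit clock sweep.

    ref_bits: per-frame reference bits (1 = touched since last sweep).
    Returns (victim_frame, new_hand_position); clears bits as it sweeps.
    """
    n = len(ref_bits)
    for _ in range(n):
        if ref_bits[hand] == 0:
            # frame with no recent use -- the normal LRU-approximating choice
            return hand, (hand + 1) % n
        ref_bits[hand] = 0
        hand = (hand + 1) % n
    # every frame was recently referenced: the reference bits carry no
    # useful ordering information, so instead of degenerating to FIFO
    # (evicting whatever the hand happens to point at), pick at random
    victim = rng.randrange(n)
    return victim, (victim + 1) % n
```

when the load is light the random branch is never taken and it behaves
like an ordinary clock; under saturating loads (all bits set) the quick
random choice replaces the FIFO degeneration.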

Latency benchmark (was HP Itanium2 benchmarks)

"Norbert Juffa" writes:
I can certainly agree with the undesirability of random replacement. But
what's wrong with LRU (or pseudo-LRU)? It lets one control what's in
the cache precisely. Or am I missing something?

Which replacement strategy do you consider the most desirable?

optimal replacement strategy ...

LRU ... least recently used ... is based on the assumption that any
data recently used by a program is likely to continue to be used
... and that data that hasn't been used for a while isn't likely to be
used for a while. That is just a general assumption. For one thing,
LRU degenerates to FIFO under all sorts of situations.

long ago, and far away ... i did a pseudo-LRU (variation on wsclock)
that under situations where normal LRU degenerated to FIFO ... would
degenerate to RANDOM instead (in detailed simulations, normal
pseudo-LRU tended to get within 10-20 percent of the performance of
true LRU while the random variation tended to beat true LRU by 10
percent).

VT50, VT51, VT52, VT55, VT61, VT62 terminals (was Re: Weird...)

jmfbahciv writes:
It was flat on top (for one's very important accessories: listings,
coffee mugs, and ashtrays), it had a little shelf where one could put
a steno pad (that's when I switched to using steno pads to keep
notes and program snippets in), and it had typable keys.
It was also "cheap" enough so that we could each have one
in our offices. I don't believe I ever had to call field
service due to breakage. VR05s were always testing one's
eyesight. I routinely dropped one before sitting down to
do any work with it. It was a physical incantation that
usually worked.

when they instituted a rule that you needed VP-level sign-off for
3270s in your office ... we did the business analysis showing that the
3-year amortized cost of a 3270 was less per month than the telephone
that everybody got on their desk as a matter of course.

that was just about the same time that some middle management
discovered that a number of corporate executives had started using
email ... and in a number of cases a whole organization's year's quota
for 3270s for engineers and programmers got rerouted to middle
management so they could appear to be doing email also ... aka all of
a sudden it became a status symbol.

later on, such things became more institutionalized ... aka nearly all
the internal PS2m80s going to managers' offices even tho they were
never used for anything but 3270 emulation for reading email.

ibmbama@YAHOO.COM (Howard Rifkind) writes:
The mainframe is dying and evolving at the same time. How many of
you folks have gone out lately looking for a mainframe systems
programmer or Cobol programming position? Find a position yet
within a reasonable distance from where you live? I work in
N.J. with a Systems Programmer for Canada. His commute is about
850 miles when he wants to go home for the weekend. How about
that. Yes, there will always be a mainframe of some sort because
of the types of data kept on them and the speed, but only with
outsourcers and very large organizations. Start learning LINUX on the
mainframe...that's where things seem to be going.

at one point the only computers that existed were in the data center.
everybody brought their card deck to the data center to be executed,
regardless of the type or nature of the calculation to be performed.

introduction of departmental and personal computing allowed some
things to migrate off the mainframe ... that were much more of
departmental or personal computing type of tasks.

there were problems when some number of enterprise-level tasks also
migrated to departmental or personal computing platforms ... in some
cases putting the enterprise at risk. part of the reason for this
migration was the difficulty and frequently long lead time of doing
any change/enhancement at the enterprise level ... no matter how
trivial.

Another contributing factor was the difficulty of implementing
local-domain applications that would access corporate data at the
mainframe. there was a legacy problem: initial PC success was
greatly facilitated by being able to do 3270 emulation ... but later
on, when it came time to move on to more sophisticated operations
... there was something of a battle with entrenched forces (business
units that had significant revenue from 3270 emulation products didn't
want to see them replaced with peer-to-peer high-speed access
products). The resistance to introducing peer-to-peer high-speed
access products into the marketplace contributed significantly to the
migration of enterprise data off the mainframe.

we got our hands slapped for coming up with 3-tier architecture ...
when the whole SAA client/server effort was trying to significantly
reverse the trend of applications moving to the PC (Lotus 123 running
on the mainframe?) ... while at the same time limiting the PC clients'
access to mainframe data to effectively 3270 emulation products
(aka a single t/r lan was more than "sufficient" for 300-500 PCs).

"Alan T. Bowler" writes:
Only some mainframe terminals ran in block mode.
Dumb ASCII async terminals were the usual choice
for timesharing use on non-IBM hardware.

i added tty/ascii support to cp/67 back when i was an undergraduate in
the late '60s. IBM picked up the code and shipped it in the product. it
had a sort of design glitch ... and later, when somebody at MIT modified
the code ... it resulted in numerous kernel crashes in a single day.

they were supported in half-duplex mode by the (ibm) 2702 terminal
controller which recognized certain line-end characters and generated
interrupt to the processor when those characters were encountered.

at the university, we ran into some issues with the 2702 terminal
controller and a couple of us started a project where we built a
terminal controller starting with interdata/3 and reverse engineering
the ibm channel interface and building our own board to interface to
the ibm mainframe channel ... supposedly credited with originating the
ibm pcm controller business:
http://www.garlic.com/~lynn/subtopic.html#360pcm

the interdata/3 supported the tty terminals in full duplex and then
played some games mapping that to half-duplex for the 2702 controller
emulation. later, it was enhanced to be a combination of interdata/4
with interdata/3 dedicated to line-scanner function.

I believe the use of the interdata as a terminal controller was then
expanded to other mainframes (not just ibm). Also, perkin/elmer
eventually bought it up ... and they were sold under the perkin/elmer
brand. I ran into one 5-6 years ago in a major transaction processing
datacenter, still handling a heavy traffic load.

there was the "yale iup" for the series/1 in the '80s which also
provided full-duplex ascii support for tty terminals to aix/370 (aka
the port of ucla locus to 370 and ps/2, released as an ibm product).

Anne & Lynn Wheeler writes:
this was when cms\apl was first created (port from phili's apl\360 to
cms environment and making available up to 16mbyte workspaces instead
of just 16k or 32k workspaces). there was an enhanced sense of
security when all the most valuable corporate data was loaded onto the
cambridge system so these guys could run their financial & business
analysis applications. eventually these guys got their own cp/67
systems down in new york. random past ref:
http://www.garlic.com/~lynn/2002h.html#34 Computers in Science Fiction

there is some line about people who don't read history being doomed to
repeat the same mistakes over & over again.

about every two years some executive in the HONE chain of command or
somewhere else in DPD would make some statement about HONE being hosted
on the MVS platform. Possibly 20-30 percent of the HONE staff would be
sent off to port HONE to the MVS platform. After 4-6 months it would be
deemed an utter and total failure ... and the porting effort would
quietly fade away. The problem being that it wasn't politically
correct to point out that it was an utter and total failure to try and
get something working on MVS. Since it wasn't politically correct to
document it ... the whole issue would crop up in another 18 months or
so and the exercise would get repeated.

If the situation assumes that keys have been securely exchanged
... and given a secure, encrypted channel using AES CFB ... then can a
MITM modify data in the transmission so that it goes undetected?

In a public key exchange ... can a MITM substitute their own public
key undetected?

In the case of a public key exchange with certificates ... can a MITM
create a situation where a valid certificate is substituted? aka there
has been lots of discussion regarding how a MITM would go about
getting a valid, acceptable certificate ... say an SSL domain name
server certificate.
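on the first question: a cipher by itself typically doesn't detect
modification; without a separate integrity check (a MAC) a MITM can make
targeted bit-flips. a toy sketch using an XOR keystream as a stand-in
(not real AES CFB -- CFB's self-synchronizing feedback also garbles the
following block -- but the targeted flip on the current bytes works the
same way, and the message, offsets, and names here are all invented for
illustration):

```python
import os

# toy XOR "keystream cipher" standing in for a stream-style mode
key = os.urandom(16)

def xor_stream(data: bytes, key: bytes) -> bytes:
    # encrypt and decrypt are the same operation: XOR with the keystream
    return bytes(b ^ key[i % len(key)] for i, b in enumerate(data))

msg = b"PAY $0100 TO BOB"
ct = xor_stream(msg, key)

# the MITM knows (or guesses) a plaintext position and value, not the
# key: turn the '0' at offset 5 into '9' by XORing in the difference
tampered = bytearray(ct)
tampered[5] ^= ord("0") ^ ord("9")

recovered = xor_stream(bytes(tampered), key)
# recovered == b"PAY $9100 TO BOB" -- it decrypts "cleanly", and the
# modification goes undetected unless a MAC is layered on top
```

which is why the later point that encryption is a technology, and
integrity a separate business-process requirement, matters in practice.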

one of the most recent discussions has been on the crypto mailing list
regarding buying a root key that is currently acceptable to a majority
of browsers; try a search engine on: SSL Certificate "Monopoly" Bears
Financial Fruit.

part of the issue is that while you might use certificates in a
current real-time public key exchange ... there is some kind of chain
of trust going back to some procedure where you accepted one or
more "root keys" by some method ... and all subsequent trust decisions
you make involving certificates involve both the method by which you
accepted those "root keys" ... and all subsequent operations that the
owner(s) of the root keys might have been involved in ... as well as
the method you use to protect and secure your list(s) of acceptable
root keys.

things like PGP just eliminate the chain of trust ... they maintain a
(secure) list of trusted keys and they use some out of band process
for introducing keys into that list of trusted keys.

certificate infrastructures, in effect, operate the same way .... but
you might not even be aware of what your list of trusted keys is
and/or what processes were used to establish and/or maintain
them. These trusted keys are then used by entities that you may know
nothing about to generate certificates (as opposed to directly signing
pieces of email) ... which you then accept on complete faith. part of
this is a possibly myopic focus on the bit-stream that composes a
particular public key certificate ... ignoring the whole rest of the
business processes involved in creating the infrastructure for the
operation of public key certificates ... certificates don't actually
eliminate any of the MITM attacks on public key exchange ... the
attacks are just moved around ... while at the same time adding a
whole bunch of new attacks.

security typically is a business process ... with business processes
like authentication, integrity, privacy, confidentiality,
availability, etc.

encryption nominally isn't a business process .... it is a technology
that might be used to address things like integrity and privacy
... aka encryption can be used to keep information private
... encryption can also be used to recognize whether data has been
modified in transit (aka integrity).

from above

security

(1) The combination of confidentiality, integrity, and availability.
(2) The quality or state of being protected from uncontrolled losses
or effects. Note: Absolute security may in practice be impossible to
reach; thus the security 'quality' could be relative. Within state
models of security systems, security is a specific 'state' that is to
be preserved under various operations. [AJP]

(I) (1.) Measures taken to protect a system. (2.) The condition of
system that results from the establishment and maintenance of measures
to protect the system. (3.) The condition of system resources being
free from unauthorized access and from unauthorized or accidental
change, destruction, or loss. [RFC2828]

A condition that results from the establishment and maintenance of
protective measures that ensure a state of inviolability from hostile
acts or influences. [NSAINT]

All aspects related to defining, achieving, and maintaining
confidentiality, integrity, availability, accountability,
authenticity, and reliability. NOTE - A product, system, or service is
considered to be secure to the extent that its users can rely that it
functions (or will function) in the intended way. This is usually
considered in the context of an assessment of actual or perceived
threats. [ISO/IEC WD 15443-1 (11/2001)] [SC27]

The combination of confidentiality, integrity, and availability. [FCv1]

The quality or state being protected from uncontrolled losses or
effects. Note: Absolute security may in practice be impossible to
reach; thus the security 'quality' could be relative. Within
state-models of security systems, security is a specific 'state', that
is to be preserved under various operations. [JTC1/SC27/N734]

Anders Thulin writes:
When you want to try a larger chunk of the sum, go for the
Common Criteria at http://www.commoncriteria.org/.

i've somewhat viewed the orange book and related specifications as
being applied to generalized, multipurpose computers .... which runs
into all sorts of problems. some of the common criteria effort has
been defining very targeted security specifications for specific
environments and operations .... aka a firewall might be implemented
using a multipurpose computer, but because of lots of mitigating and
compensating procedures, a large amount of the generalized security
specification may not be applicable to its operation.

repeat of the definition of "security" (as quoted in full above).

"Rupert Pigott" <dark.try-eating-this.b00ng@btinternet.com> writes:
I don't want to stick up for Amtrak, god knows I don't, but looking
after crumbling railway infrastructure ain't cheap. :)

part of the issue is that the trucking infrastructure is heavily
subsidized thru the road system .... aka nearly all the road
infrastructure costs are related to heavy trucking. the building of the
original railroad infrastructure was heavily subsidized by the large
land grants ... but that has essentially been pretty well bled off over
the years. The issue now is the day-to-day operational revenues
vis-a-vis day-to-day operational & infrastructure costs (trucks
vis-a-vis railroads).

Starting from the assumption that nearly all the road infrastructure
costs (original build, ongoing maint., etc) are almost totally heavy
trucking related ... it would be logical(?) for all fuel taxes
supporting road systems to be applied only to heavy trucks ... in
effect, all the fuel tax income currently spread across the whole
driving population would instead be recovered solely from heavy
trucking activity.

I have no idea what the current percentage of total fuel consumption
is by heavy trucking ... just for argument's sake let's assume three
percent. With fuel tax running around 40 cents (federal + state), then
if this was to be totally recovered from heavy trucking fuel
consumption ... it would need to be raised by a factor of roughly
thirty, to around twelve dollars per gallon.
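the back-of-envelope arithmetic above can be checked directly (the
three percent trucking share is the post's assumed figure, not a
measured number):

```python
tax_per_gallon = 0.40    # current federal + state fuel tax, in dollars
trucking_share = 0.03    # assumed fraction of fuel burned by heavy trucks

# recovering the same total revenue from trucks alone means scaling the
# per-gallon rate by the reciprocal of their consumption share
multiplier = 1.0 / trucking_share
truck_tax_per_gallon = tax_per_gallon * multiplier
# multiplier is ~33x, truck_tax_per_gallon ~ $13.33 -- on the order of
# the "factor of roughly thirty ... around twelve dollars" in the text
```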

The primary goal of the design of the pavement structural section is
to provide a structurally stable and durable pavement and base system
which, with a minimum of maintenance, will carry the projected traffic
loading for the designated design period. This topic discusses the
factors to be considered and procedures to be followed in developing a
projection of truck traffic for design of the "pavement structure" or
the structural section for specific projects.

Pavement structural sections are designed to carry the projected truck
traffic considering the expanded truck traffic volume, mix, and the
axle loads converted to 80 kN equivalent single axle loads (ESAL's)
expected to occur during the design period. The effects on pavement
life of passenger cars, pickups, and two-axle trucks are considered to
be negligible.

Traffic information that is required for structural section design
includes axle loads, axle configurations, and number of
applications. The results of the AASHO Road Test (performed in the
early 1960's in Illinois) have shown that the damaging effect of the
passage of an axle load can be represented by a number of 80 kN
ESAL's. For example, one application of a 53 kN single axle load was
found to cause damage equal to an application of approximately 0.23 of
an 80 kN single axle load, and four applications of a 53 kN single
axle were found to cause the same damage (or reduction in
serviceability) as one application of an 80 kN single axle.
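the load-equivalency idea above is commonly approximated by the
generalized "fourth power law" fitted to the AASHO Road Test data; a
minimal sketch under that assumption (the smooth power law gives ~0.19
for a 53 kN axle, while the road-test figure quoted above is ~0.23, so
treat the exponent and results as approximations):

```python
def esal_factor(axle_load_kn: float, ref_load_kn: float = 80.0,
                exponent: float = 4.0) -> float:
    """Approximate 80 kN equivalent single axle load (ESAL) factor for
    one pass of a single axle, via the generalized fourth-power law --
    an empirical fit to AASHO Road Test data; tabulated AASHTO factors
    differ somewhat from this smooth approximation."""
    return (axle_load_kn / ref_load_kn) ** exponent

# one pass of the reference 80 kN axle is, by definition, 1.0 ESAL;
# a 53 kN axle comes out to roughly 0.19 ESAL with exponent 4
```

the steep exponent is why the passage notes that cars, pickups, and
two-axle trucks contribute negligible pavement damage compared with
heavy truck axles.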

ab528@FreeNet.Carleton.CA (Heinz W. Wiggeshoff) writes:
Starting with artillery tables, it's hard to decide which general
area of engineering has taken up more CPU time within the realm of
a.f.c (i.e. 20 years ago). I believe it was McDonnell Douglas that
unloaded 6 high end IBM 91 or 95 or 195s in the early 70's at a time
that I was still mightily impressed with the 85. I guess that old
Nastran code could really crunch through the CPU cycles.

when i did a stint at boeing for the BCS (boeing computing
system/services) startup (i was one of the first couple dozen
employees) ... renton had a really large datacenter ... the joke was
that there were constantly 2-3 360/65s boxed in the hallways awaiting
staging for installation on the machine room floor.

for disaster recovery purposes, all of renton data center was later
duplicated in everett (one of the scenarios is mud-slide down the
nearby mountain; which they take seriously ... some of the small towns
nearer the mountain have sirens and drills).

one story they told is that the day after the 360 announcement
... boeing walked into their local salesman's office and placed an
initial order for something like twenty 360/65s (actually would have
been 360/60s on announcement day). his commission exceeded the top
executive salary that year ... and as a result the next year corporate
hdqtrs created the quota plan (rather than straight commission). That
year, his commission again exceeded the top executive salary ... and
they upped his quota again. He then left and formed his own computer
consulting and services company (which much later got bought by GM,
and then he formed a new computer consulting and services company and
also ran for president).

it's been 15 years or so since some SGI engineers were looking at
applying graphics engine pipelining to a protocol engine for TCP/IP FDDI
offload (when FDDI was the only 100mbit/sec around, excluding multiple
parallel 50mbyte HYPERChannel).

my wife did a stint in POK responsible for loosely-coupled
architecture. while there she authored peer-coupled shared data
architecture ... which didn't see much use until sysplex, except for
possibly ims hot-standby.

part of the reason was we were looking for scalable support and none
of the rios chips had support for consistent shared memory ... and the
power/pc with shared memory support was still years away (at the time
we started on high-availability/cluster multiprocessing in the late
'80s).

and then after we both took an early out in '92 ... we spent some time
talking to sequent, convex, and tandem. both sequent and tandem
claimed to have done significant (shared-memory) parallelization work
on the NT kernel in the early to mid 90s.

sequent had a snoopy bus with intel processors that went to 32-way
... but I believe those configurations were primarily supported by
dynix (sequent's unix) ... i believe the NT work was primarily on
4-way to 8-way processor shared memory configurations.

side note ... if anybody remembers the netscape downloads of the
mid-90s ... they had multiple large servers netscape1, netscape2,
netscape3, ... and it was suggested that people sort of randomly try
different nodes looking for lightest loaded. they eventually installed
a large sequent configuration as netscape20 ... and the problem sort
of just evaporated (as did most of the multiple servers). at the time,
sequent dynix had possibly the best-scaling tcp/ip support around (at
least in terms of supporting a large number of concurrent sessions;
they were also one of the first to have a scalable solution to the
dangling finwait opportunity; at the time there were some situations
where processors were spending >90 percent of the cpu running the
dangling finwait list).

sequent and data general both did 256-way intel using SCI
... basically 4-way intel quad-board sharing local cache and 64-port
SCI configuration implementing 256-way global shared memory (sequent
took relatively standard intel 4-way SMP quad-boards and did the work
to make it work with 64-port SCI).

note that there are both lcmp clusters and shared-nothing clusters.
lcmp clusters typically have shared-access to disk (while not having
shared-memory). shared-nothing clusters (like wolfpack) rely on
network message passing ... to implement things like replicated data.

press release from 10/12/95 (remember AT&T was also NCR)
Companies Voice Support for Microsoft Clustering Strategy

AT&T Global Information Solutions welcomes the opportunity to
participate in providing customers with an industry standard for
clustering technology. AT&T has years of experience in delivering
clustering and fault-resilient technology with AT&T(R)
LifeKeeper. Through our collaboration with Microsoft, we plan to
protect and enhance our customers' investment in Windows NT Server
solutions from AT&T, and to continue to deliver superior
high-availability solutions that drive and utilize future
industry-standard clustering technology for Windows NT Server.

extract from 8/96
http://www.winntmag.com/issues/Aug96/wolfpack.htm

What Is Wolfpack?

Several leading NT Server systems vendors, including Compaq, Digital
Equipment, HP, NCR, and Tandem, have been independently working on
clustering solutions for a few years. These vendors agreed to pool their
expertise with Microsoft in an initiative to produce a cross-vendor standard
for NT Server clusters. This group wanted to give NT Server customers the
greater choice and flexibility they wanted. So in October 1995, Microsoft
announced its intent to develop strategic partnerships to fashion a new
clustering standard with the code name Wolfpack.

This name and many of its technology goals derive from Pfister's book. In
Chapter 4, Pfister describes a cluster as a "pack of dogs." While searching
for a code name for the API, Microsoft came across this book and decided to
describe clusters with the name Wolfpack, which sounds a lot cooler than
Dogpack.

Wolfpack is an alias for clusters, and the six core vendors in Microsoft's
clustering project consider themselves members of the Wolfpack. These
members are Digital, Compaq, Tandem, Intel, HP, and NCR. Each partner
contributes key components of its existing technology. Other vendors,
including Amdahl, IBM, Octopus, Vinca, Marathon, Stratus, and Cheyenne, have
agreed to support the Wolfpack API. These vendors are part of Microsoft's
Open Process, which includes about 60 vendors and customers who are part of
design previews during various stages of Wolfpack development.

Wolfpack describes a set of cluster-aware APIs, NT cluster support, and a
clustering solution (which means a vendor can claim to be Wolfpack compliant
while competing with the Wolfpack solution on a different level--so if a
vendor claims to support Wolfpack, you need to ask how). Here's a detailed
explanation of each Wolfpack component.

also from
http://www.winntmag.com/issues/Aug96/wolfpack.htm
Wolfpack: The Solution

Microsoft will deliver Wolfpack, the solution, in two phases.
Phase 1 is two-node availability and scaling clusters (a new version of SQL
Server will let you work on the same database from two servers at once).
Phase 2 will allow more than two nodes in a cluster.

Reread the first paragraph in this article. That scenario describes a June
1996 demonstration of a Wolfpack availability cluster solution at PC Expo in
New York City. This two-node failover capability is the basis for Phase 1 of
Wolfpack (early 1997 is the estimate for delivery). The price for Wolfpack's
Phase 1 release is not set, but one rumor is that NT Server will include
Wolfpack at no additional cost. As I write this article, Compaq, Digital,
HP, NCR, Amdahl, Stratus, and Tandem have all announced plans to OEM the
Wolfpack-based cluster solution.

The next step in Phase 1 (set for the second quarter of 1997) will be an
open certification program with the goal of expanding the market for
two-node cluster solutions and giving NT Server customers a greater
selection to choose from. Microsoft is also committed to making Wolfpack
available on Intel, Alpha, PowerPC, and MIPS chips.

M$ SMP and old time IBM's LCMP

"Rupert Pigott" <dark.try-eating-this.b00ng@btinternet.com> writes:
The weird thing is : It all seemed to go very quiet after phase 1
(ie: basic fail-over). I was wondering if they renamed it or just
silently dropped it.

Stuff like Longhorn can't really help it much either (if you
believe that Bill is not implementation on backward compatibility).

i thot that the microsoft terabyte satellite image internet server was
some sort of shared-disk clustering.

the big new thing in clustering seems to be the grid stuff that the
high-energy physics guys seemed to have started ... it was all over
supercomputer 2002 in denver at the start of the year ... and now
"grid" seems to be the new, in term.

SHARE Planning

jmaynard@CONMICRO.CX (Jay Maynard) writes:
Assuming I can get from the airport to my hotel, this sounds like good
advice. I've made reservations (an Expedia vacation package wasn't too
expensive for two days...), and will definitely be there.

(Plug: I'm doing Session 2880, The Hercules S/370, ESA/390, and
z/Architecture Emulator, at 6 PM Tuesday. If you want to find out the real
truth about Hercules, that's the place.)

i just got note/invitation to (30th anniv) event 6pm wed. 21st.

i had been part of announcement at spring '68 share meeting in houston
... but that 30th year has come and gone (will be 35 years next
spring).

Funeral for a friend - Infiniband

"Tarjei T. Jensen" writes:
The problem seems to be the way Irix handles TCP acknowledgements on high
bandwidth devices. It looks like it does not bother to keep track of the
likely send buffer of the other host (it is likely that the send buffer is
of the same size as the receive window) and instead expects everybody to use
the 60KB window Irix announces. This fails spectacularly with Netware 6
which uses a TCP window size of 6 - 12KB.

side note: while some number of SGI people spearheaded the effort
... it was somewhat independent, and a lot of the prototype work
actually was done on sun platforms. from some long dusty past ... I
served on the group's technical advisory board. somewhere in the
basement, I think I still have a 3 foot pile of TAB documents plus a
hardcover book by some guys at univ. of virginia(?).

i guess i missed CICS. the university that I was at was one of the
beta-test sites ... and I had the task of supporting and debugging
CICS during the beta-test period (the beta-test end and my graduation
happened about the same time).

the spring '68 announcement that was held at the houston share meeting was
for cp/67 ... the precursor to vm/370.

Yes I was aware of the 5250 (boy, not in a long time though). I suspect that
they are floating the 5250 "replacement" as a way to get rid of the 3270.
Maybe I am reaching but it's not the first time IBM has started out doing
something in one arena and taken it company wide.

Reaching, maybe. But I think we have all seen the signs, no?

Ed

this is a reference to something similar going on 15 years ago when we
started pushing 3-tier architecture and were being opposed by the SAA forces.

Mok-Kong Shen <mok-kong.shen@t-online.de> writes:
To be fair, I suppose that one should mention that
secret agencies of many (presumably all) other
countries of the world are doing similar things, even
if they may not have equally high material, technical
and intellectual resources. Assuming that these are
anyhow bound by morals/ethics would be at least as
questionable as (if not more questionable than)
assuming that all clergymen are morally impeccable
persons.

back in the early to mid '80s when the large computer company i was
working for started looking at allowing people to access email while
traveling ... a vulnerability assessment highlighted hotel PBXs as one
of the most vulnerable points. as a result, the company built custom
2400-baud modems that included session key generation and exchange and
des-encrypted transmission.

at least corporate espionage was an issue in the US ... and both
corporate and gov. espionage were issues outside the US. I seem to
remember something in the news from the period about gov. agents in
some european country going thru hotel rooms as part of industrial
espionage efforts.

even prior to the "road warrior" issue ... all of the telco lines for
the internal corporate network required link encryptors (for some period,
the claim was that half of all link encryptors in the world were
installed on the internal network; that claim may have been qualified
with "non-gov" ... i don't remember) ... I remember this causing some
issues with gov. PTTs in europe ... especially on lines that crossed
country borders.

SHARE Planning

Anne & Lynn Wheeler writes:
i guess i missed CICS. the university that I was at was one of the
beta-test sites ... and I had the task of supporting and debugging
CICS during the beta-test period (the beta-test end and my graduation
happened about the same time).

MIT says I don't live in the USA

Mickey writes:
I've tried to download PGP from the MIT site and it tells me that I
don't live in the USA. The last time I checked, Chico, CA was in the
US. What can I do?

mostly i've seen this happen when the ISP doesn't provide reverse-DNS for
the ip-address you are using (the usual process i've seen is to do a
reverse-DNS mapping from ip-address to domain name ... and then do
some sort of check on the domain name).
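a rough sketch of that kind of check (hypothetical policy and names, not the actual MIT code; the TLD list is an assumption for illustration):

```python
import socket

def tld_looks_us(hostname):
    # crude heuristic on the resolved name: classic gTLDs plus .us
    tld = hostname.rsplit(".", 1)[-1].lower()
    return tld in ("com", "edu", "gov", "mil", "net", "org", "us")

def reverse_dns_check(ip_address):
    # reverse-map the ip-address to a domain name, then check the name;
    # with no PTR record the lookup fails and the client is rejected,
    # even if the address really is in Chico, CA
    try:
        hostname, _aliases, _addrs = socket.gethostbyaddr(ip_address)
    except (socket.herror, OSError):
        return False
    return tld_looks_us(hostname)
```

note the built-in failure mode: if the ISP publishes no PTR record at all, the reverse lookup raises, and a US client is indistinguishable from a foreign one.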

I also gave a talk on it at the Intel Developer's Forum last year,
including a claim that pretty much as it currently existed, it could
do all the things that were required for a trusted computing
module. A copy of this presentation is also at the above URL (slides
on assurance).

Server and Mainframes

msimpson@UKY.EDU (Matt Simpson) writes:
When we got our 9672, it was called a "Parallel Enterprise Server". I
forget what our Multiprise 3000 is called. I think it's also some kind
of "server". But when I pestered the salesweasel, she started throwing
out model numbers, and that was one that she mentioned as being a
"mainframe".

I think that in some other ng (like a.f.c.) where this has been a
topic, mainframe started out denoting the main frame in the (telco)
central exchange ... to distinguish between it and all the other
frames/boxes. This appears to have migrated to the dataprocessing
floor ... picking out the central processing unit from all the other
hardware boxes that supported the central processing unit. It was
also used to distinguish between the multiple-frame data processing
units and the single-frame minicomputers.

specifically within the ibm customer world ... it was associated with
the ibm 360, 370, etc lineage of computers and along the way became
somewhat synonymous with their applications and capabilities.

with hardware technology advances, it became possible to build single
frame processors that would run the "mainframe" operating systems and
the associated applications.

also over the years, with various minicomputer and microcomputer
evolution, those machines acquired applications that had lots of the
attributes that had previously only been seen in the "mainframe"
dataprocessing world.

in any case, something that may have originally started out
highlighting the distinction between the "large" multiple-box
dataprocessing and the single-box minicomputers ... has become quite
ambiguous ... losing a lot of its original specific meaning ... and
along the way acquiring a lot of additional connotations.

"Ben Mord" writes:
I have a question about exactly what guarantees SSL can and can not provide,
specifically in the absence of client certificates. I am particularly
interested in channel integrity - the inability of a third party to hijack
an established connection and pass themselves off as the one who first
initiated the channel.

pornin@nerim.net (Thomas Pornin) writes:
If the server has a certificate, and the client has none, SSL guarantees
that:
-- The client has a strong assurance of talking to the "right" server
(for HTTPS, the server must use a certificate which contains its DNS name,
and the web browser verifies that property).

the reason for the SSL domain name certificate is because of questions
with regard to the integrity of the domain name infrastructure (i.e.
a client compares the domain name that it thinks it is talking to
with the domain name in the server certificate).
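a minimal sketch of that client-side comparison (simplified matching rules, assumed for illustration; real browsers follow the RFC 2818/6125-style rules):

```python
def cert_name_matches(cert_names, requested_host):
    # compare the host the client thinks it is talking to against the
    # DNS names carried in the server's certificate; a "*." wildcard
    # covers exactly one leading label
    host = requested_host.lower().rstrip(".")
    for name in cert_names:
        name = name.lower()
        if name.startswith("*."):
            # wildcard matches one label: *.example.com covers shop.example.com
            if "." in host and host.split(".", 1)[1] == name[2:]:
                return True
        elif name == host:
            return True
    return False
```

the point being: the only thing the client has to go on is the name it asked the domain name infrastructure to resolve ... so the whole check is only as good as that infrastructure.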

as mentioned on some mailing lists, there are starting to be concerns
about browser tables of trusted public keys (i.e. the list of public
keys for certification authorities that may sign acceptable server
certificates).

there is also the issue that when processing a domain name certificate
application ... the certification authority must rely on the
authoritative agency responsible for the information being
certified. In the case of domain name certificates, the authoritative
agency is the domain name infrastructure ... the very same domain name
infrastructure whose integrity questions give rise to needing ssl
server domain name certificates in the first place.

Now there are some integrity enhancements ... in part proposed by the
certification authorities to improve the integrity of the domain name
infrastructure ... so that the certification authorities can trust the
certification of the domain name request that they perform with the
domain name infrastructure.

A possible obvious catch-22 is that if the domain name infrastructure
integrity is improved for use by the certification authorities ... it
may be of sufficient integrity for everybody's use, and ssl domain name
certificates would no longer be necessary.

john.veldhuis writes:
Q: I need some help on mapping a digital cert to a user account in
Windows NT environment. During logon, instead of user name and
password, the cert would be read from a smart card. User would be
asked to key-in the corresponding PIN. If authenticated, s/he would be
allowed to access the resources the same way as s/he would do with
user name and password. Is there any such product available in the
market for windows NT?

note the working-draft PKINIT enhancement to kerberos (m'soft's
authentication infrastructure is kerberos).

the pkinit draft specifies that a message is digitally signed (in the case
of a hardware token, the hardware token "signs" the message). the
message is designed to prevent replay, sniffing, or skimming attacks.

any hardware token that just transmits a fixed value ... even if it is
a cert ... is subject to replay/sniffing vulnerabilities (aka somebody
who eavesdrops on any fixed transmitted value ... and is then able to
reproduce it ... is capable of defeating the system). This is the same as
the replay/sniffing vulnerabilities of passwords ... once you find out
the value ... you can operate fraudulently.

unfortunately there are some number of hardware tokens that actually do
just transmit a fixed value for authentication ... putting them down
close to the password paradigm from an integrity standpoint (subject
to sniffing/skimming vulnerability and subsequent fraudulent replays).
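the difference can be sketched in a few lines (hypothetical token protocol, not the actual PKINIT wire format): a fixed value is replayable, a signature over a fresh server challenge is not.

```python
import hmac, hashlib, os

def fixed_token_response(secret):
    # always the same bytes: anyone who sniffs one login can replay it
    return secret

def challenge_response(secret, challenge):
    # binds the response to a server-chosen nonce; a sniffed response is
    # useless for the next login, which uses a different challenge
    return hmac.new(secret, challenge, hashlib.sha256).digest()

secret = b"token-private-value"
c1, c2 = os.urandom(16), os.urandom(16)

assert fixed_token_response(secret) == fixed_token_response(secret)      # replayable
assert challenge_response(secret, c1) != challenge_response(secret, c2)  # not
```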

SSL integrity guarantees in absence of client certificates

Edward Elliott writes:
Not as long as the internet runs on IP v4. If I can spoof an IP
address, I can make the other party think I'm coming from the domain
that IP address is in.

one of the certification authority related proposals for improving the
integrity of the domain name infrastructure is for the domain name
owner to register their public key at the same time they register the
domain name. from then on all communication between the domain name
owner and the domain name infrastructure is then digitally signed and
verified using the public key stored in the account record for that
domain name.

now it turns out that the existing domain name infrastructure
implementations provide a generic facility for real-time
distribution of information, aka all information associated with a
domain name can be distributed ... not just the ip-address.

a significant amount of the ssl handshaking chatter would go away
if the public key (if available) was piggybacked on the same response
that returns the ip-address to the client ... as part of the standard
domain name infrastructure support.

all the certificate stuff disappears ... all of the browser trusted
key repository disappears ... all the certificate related SSL
handshaking chatter disappears ... poof ... you have both a trusted
ip-address and a trusted public key in a single operation.
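the "single operation" idea can be sketched as follows (hypothetical record layout and names, purely illustrative): the resolver hands back the address and the registered public key together, so no certificate chain or trusted-CA table is needed.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class DnsAnswer:
    ip_address: str
    public_key: Optional[bytes]   # registered at the same time as the domain

RESOLVER_DB = {   # stand-in for the domain name infrastructure's records
    "example.com": DnsAnswer("93.184.216.34", b"---registered-public-key---"),
    "no-key.example": DnsAnswer("198.51.100.7", None),
}

def resolve(name):
    # one round trip yields both the trusted ip-address and, when the
    # owner registered one, the trusted public key piggybacked on the
    # same response -- key exchange can start immediately
    return RESOLVER_DB[name]
```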

Server and Mainframes

cfmtech@istar.ca (Clark F. Morris, Jr.) writes:
Is a Tandem a mainframe? The design philosophy was that anything could
fail and the system would keep on chugging. You could add hardware and
software on the fly at a time when it was an IPL for IBM. I believe
there were other no downtime vendors, one of which was bought by IBM.

at least ibm paid a lot of money to logo another vendor's
product. there was something of an issue in the marketplace though, with
essentially the same product competing under two different brand
names.

Edward Elliott writes:
I sincerely doubt a significant portion of stolen credit card numbers
come from stealing cookies or hijacking HTTP/S sessions. The targets
are too decentralized for effective harvesting. Huge databases of
credit card numbers kept by ignorant merchants are far more attractive
targets, and we have evidence these have been compromised in the
past. I'm sure even more are taken through plain-old real world
methods that predate the net.

Check the VISA and MasterCard fraud rates (if they release such
figures), but I don't believe there's been any appreciable increase in
card fraud in the past decade, at least in the US.

part of the issue is that the account number essentially is itself
authentication information and therefore must be treated as a
shared-secret. the financial standards x9a10 working group was
given the requirement to preserve the integrity of the financial
infrastructure for all electronic retail payments (not just debit,
or credit, or atm, or just internet or point-of-sale, etc ... but
ALL):
http://www.garlic.com/~lynn/x959.html#x959

does anybody remember the cern cms/tso bake-off report distributed at
share (circa 1974)? internally, copies of the report were
stamped "confidential, restricted" (aka available on a need-to-know
basis only).

at various points it was thought that share did more vm marketing than
possibly any "marketing" organization ... (aka customer vm uptake at
times seemed to be in spite of the marketing organization).

in any case, if it hadn't been for share ... you might not have seen
the evolution of gml/sgml to html/xml/etc at cern.

the other interesting side-light was that for an extended period of
time, the whole world-wide marketing organization operated on
HONE ... which was all VM/CMS (initially starting out on CP/67)
... and the US HONE installation was for a long time the largest
single-system data processing complex in the world.
random refs:
http://www.garlic.com/~lynn/subtopic.html#hone

starting with the 370 115/125 a marketing rep couldn't even order a
machine w/o using HONE.

and eventually the whole corporation "ran" on the internal network
.... which was from just about the start until possibly sometime circa
1985, larger than the whole arpanet/internet ... and was predominately
VM.

One of the issues on the internal network ... was that by the time the mvs
platform got jes support ... the size of the internal network was
larger than the jes node table ... and the various increases in the
jes node table size always trailed the number of nodes in the internal
network. The NJE driver had a habit of throwing away network traffic
for nodes that weren't in the local table ... which eliminated the
possibility of any JES node ever acting as an intermediate store&forward
node on the internal network. As a result, JES nodes were simply
restricted to boundary nodes on the internal network.
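the behavior can be sketched as follows (hypothetical driver logic, not actual JES/RSCS code): an NJE-style driver discards traffic for nodes missing from its bounded local table, while a gateway-style driver forwards unknown destinations onward.

```python
NODE_TABLE = {"SANJOSE", "HURSLEY", "YORKTOWN"}   # bounded by the JES table size

def nje_route(destination, payload, deliver, forward):
    if destination in NODE_TABLE:
        deliver(destination, payload)
    # else: silently thrown away -- so a JES node can never safely sit
    # in the middle of a network larger than its table

def gateway_route(destination, payload, deliver, forward):
    if destination in NODE_TABLE:
        deliver(destination, payload)
    else:
        forward(destination, payload)   # store&forward toward the rest of the net
```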

The other problem was that JES made the mistake with NJE of
effectively confusing network issues and jes operation; as a result,
incompatible releases of JES could bring down the MVS
system (i.e. a new release of JES with a slight modification to the NJE
header ... could precipitate mvs system crashes if transmitting to
earlier-release JES-based systems).

The vm network support ... divorced the transport interface from the
operation of the network ... and essentially from the beginning had
effectively "gateway" capability at each node. A VM node might have
its native drivers (which also had more efficient & higher thruput
than NJE drivers) and possibly one or more NJE drivers.

Frequently a VM node acting as a store&forward node in the internal
network with some number of JES boundary nodes ... would have NJE
header "cleaner code" incorporated into each of its NJE drivers. The
NJE header cleaner code would be specific to the release level of JES
that the driver code was talking to. The point of the NJE header
cleaner code was to normalize NJE header information that might
have originated from a JES system at a different release or version
level, and keep the local MVS system from crashing.
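a minimal sketch of the "cleaner code" idea (the field names here are made up for illustration, not actual NJE header fields): strip anything the target release isn't known to parse and normalize the rest.

```python
KNOWN_FIELDS = {"origin", "destination", "record_count", "release"}

def clean_nje_header(header, target_release):
    # keep only fields the target JES release is known to parse, so a
    # header variant from a newer release can't crash the receiving system
    cleaned = {k: v for k, v in header.items() if k in KNOWN_FIELDS}
    cleaned["release"] = target_release   # present the level the target expects
    cleaned.setdefault("record_count", 0)
    return cleaned
```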

One of the case histories was where a JES/MVS boundary node in San
Jose was crashing MVS systems in Hursley, i.e. the San Jose system had
upgraded its JES2 system, which generated slightly different NJE
header information ... and the VM NJE header cleaner code in Hursley
wasn't handling the new variation and was letting a glitch get thru. To a
lot of people ... it wasn't viewed as a severe deficiency in MVS,
JES, & NJE ... it was a VM "bug" because VM wasn't keeping the MVS
system from crashing.

in any case, one of the reasons that the internet finally started to
grow faster than the internal network in the '80s was the big internet
switch-over on 1/1/83 to a protocol that actually had the IP layer as
well as "gateways" (aka ... prior to 1/1/83 the arpanet/internet
didn't have internet protocol support).

not only did cp/40, cp/67, and vm/370 come out of the cambridge
science center ... but also things like gml and the internal network.

Total Computing Power

"Russ Holsclaw" writes:
Of course, there were many private networks in existence, but not that many
with "high speed" links. A more typical long-haul connection, for terminals
like the 3270, was on the order of 4800 or 7200 bps.

every once in a while, 9600.

there was a bank that had thousands of 3270s at hundreds of branches,
with a 9600-baud multi-drop line (aka multiple 3270s shared the same 9600
line) to each branch. they upgraded to a distributed computing model
with something like a T1 line to each branch, and things slowed down. The
issue was that the 9600-baud lines were only shipping the screens to the
3270s ... while the distributed computing model was trying to
continually ship large amounts of the data to the local branches, where
the data then would be munged on before displaying on the screen.

with respect to drop-off in the growth rate of the internal network
.... see the departmental computing threads.

basically the late '70s and early '80s saw an explosive growth in
mainframes (and other machines) used in departmental computing
environments. in the mid-80s that marketplace started to migrate to
high-end PCs and various workstations. The internet treated those
boxes as network nodes, while the internal network supported them with
terminal emulation ... not as true network nodes (there may have been
a large number of such machines with internal network connectivity
... but they didn't show up as network nodes ... while they did on the
internet).

cbh@ieya.co.REMOVE_THIS.uk (Chris Hedley) writes:
I think I always end up replying to this subject. :) Although the
full-screen-update on a 9600bps multidrop was slow, the overall
response times when connected to an overburdened 3090 with its
meagre-by-today's-standards 32MB of core was much better than much of
what I've seen with today's fashionable technology, ie workstations with
amazingly fast processors and quarter of a gig of core connected to
the remote site using a comparatively fat pipe. I know I've moaned
that a 9600bps multidrop can be slow, but that's only relative to a
nice channel-connected controller.

for the IMS group ... when they got pushed to a remote site out of STL
(but all the dataprocessing equipment remained in the STL machine
room) ... they wouldn't tolerate non-channel connected 3270s.

I got to write the HYPERChannel support for the A51x boxes and
the channel driver for the local A22x channel-attach box.

Basically the A51x box emulated an IBM local channel and connected to
channel-attached controllers. The A22x attached directly to an IBM
channel. Between the A22x box and the A51x boxes was a HYPERChannel
network that included a T1 segment.

This provided what appeared to be local, channel-connected 3270
response at the remote site for the IMS group ... and had the
side-effect that the thruput of the IBM mainframes went up. The
side-effect was that the A22x channel box had much more efficient
channel handshaking than the 3274 controllers ... which meant that
overall local/real channel busy went down for the same amount of 3270
activity.

I also did the rfc1044 tcp/ip support for the standard mainframe
product ... using the A22x channel interface. I was able to test this
at cray research using a 4341-clone operating at full channel speed
talking to a cray machine, using a very modest amount of the 4341
processor. The base tcp/ip product with the standard channel interface
would come close to saturating a 3090 processor getting 44kbytes
(440kbits) per second.

Killer Hard Drives - Shrapnel?

Charles Richmond writes:
I am still trying to figure out how they can have an
"Interstate" highway in Hawaii... Perhaps it is just
a euphemistic term. Certainly the highway can not
go to any other state...unless some of the Hawaiian
islands want to break away and form a new state.

H1, H2, & H3 on oahu; there are lots of designated "loops" around
major cities that get interstate funds that aren't directly interstate
... although they tend to connect to things that are interstate.

one of the early HONE uses was for testing/demoing 370 software. HONE
started out with 360/67s and CP/67 "H". People could do this from a
2741 in the branch office w/o even needing to travel to the nearest
datacenter and getting dedicated 3rd-shift machine time (this possibly
somewhat prompted the Hands-On Network Environment acronym).

CSC had CP "L", CP "H", and CP "I". The "L" system was standard CP/67
kernel with lots of internal modifications that hadn't shipped in the
product. "H" had modifications to the kernel to provide (selectable)
both 360 virtual machines and 370 virtual machines (aka
instructions were simulated as per the 370 architecture not the 360
architecture ... as well as the new 370 instructions that didn't exist
in 360). "H" had both 370 non-relocate mode as well as 370 "virtual
memory" mode. "I" kernel had modifications that would run on the 370
virtual memory architecture rather than the 360/67 virtual memory
architecture. The original purpose of the "H"/"I" project was to
develop a test environment for the new 370. Copies of the "I" kernel were
regularly running 12 months before the first 370 engineering machine
with virtual memory was available. In fact, when the first 370 virtual
memory engineering machine was ready to test (this was a machine that
booted by throwing a knife switch), the "I" kernel was used.
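the "H" kernel's selectable simulation can be sketched roughly like this (hypothetical dispatch, not actual CP/67 code; the handler strings are placeholders): each virtual machine is tagged with the architecture it presents, and privileged instructions are simulated per that architecture, including 370-only opcodes that simply don't exist for a 360 virtual machine.

```python
HANDLERS_360 = {"LPSW": lambda: "360 load PSW semantics"}
HANDLERS_370 = {"LPSW": lambda: "370 load PSW semantics",
                "PTLB": lambda: "purge TLB (370 only)"}

def simulate(vm_arch, opcode):
    # pick the simulation table based on the architecture the virtual
    # machine was defined with, not the real hardware underneath
    table = HANDLERS_370 if vm_arch == "370" else HANDLERS_360
    handler = table.get(opcode)
    if handler is None:
        # an opcode outside the virtual architecture gets a program check
        raise RuntimeError("program check: %s not valid on %s" % (opcode, vm_arch))
    return handler()
```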

After "basic" 370 had been announced ... but before machines were
generally available; one of the early HONE applications was to allow
field people to test software against the "new" architecture ("virtual
memory" 370 mode was crippled because it hadn't been announced
yet). Later when 370 "virtual memory" was announced (and machines
weren't yet available), people in the field had the ability to boot
operating system kernels and test software (note that "H" & "I"
kernels were also being used extensively inside IBM for development
and test).

There was also a CP/67-SJ kernel ... this was the CP/67 "I" kernel
with changes done by the San Jose engineers to support block
multiplexor channel, IDALs, 3330 disk drives, and 2305 fixed-head disks
(i.e. after real 370s with virtual memory hardware became available
internally with 3330s and 2305s and long before vm/370 was available).

Slightly different folklore: the original development SVS
... was an MVT kernel modified to support virtual memory, including a
well-hacked copy of CP/67's "CCWTRANS" (the cp/67 module that handled
virtual-to-real CCW translation and page fixing) ... starting out
testing on the 360/67 virtual memory architecture ... and then modified to
test in a CP/67-H 370 virtual machine.

one of the things I was doing was walking the fab. line .... i don't
envy the people that have to work all day in those bunny suits.

the chip has about the same memory as a 360/30 but is much faster ...
although there aren't as many i/o devices attached ... but it does
fit in one of those little plastic cards you may have in your wallet.

J. Clarke writes:
Yebbut in industrialized societies the correlation is negative--those
with wealth and social status tend to have small families. And if
they're so dumb howcum they're rich?

i think some UN study in non-industrial societies showed an inverse
correlation between female education and family size (the more
education the female has, the smaller the family size). wealth may be
somewhat correlated with knowledge ... not necessarily IQ; even little
things like knowing when to plant, how to prepare food, sanitation,
etc.

nmm1@cus.cam.ac.uk (Nick Maclaren) writes:
The IBM 3270. Good for form-filling, but a real pain for
text editing, and abandoned by many customers in favour of things
like DEC front-ends. The solution to that was:

IBM's model of using the PS/2 and OS/2 to run downloaded
client software, NOT under the control of the user. CUA and all
that. Firmly rejected by customers.

X Terminals. Always a stupid idea, they became obviously
stupid when people started using customisable fonts and window
managers started being clever.

at least both the 3270 and x-terminals ... seemed to be a
price/performance optimization ... given relatively expensive (at the
time) processors, memory, disks, software, etc. over time all of those
costs changed significantly ... in effect invalidating the basis for the
earlier trade-off decisions.

the 3272 (controller) / 3277 (terminal) with some local hardware hacks was
almost livable (at least local channel-attached). the later 3274
controller, which moved much of the hardware out of the terminal (shared
hardware in the controller) ... aggravated various user interface
issues and negated the ability to do hardware hacks in the terminal
(also, any networked controller versions were a real pain).

One of the worst 3270 issues was the traditional mainframe half-duplex
i/o model. if you happened to be doing a keystroke at the same time
there was a screen write ... the keyboard locked and you had to stop
and hit the reset key. a simple hardware hack on the 3277 eliminated
that particular bit of nastiness. form-filling was a significantly
more half-duplex paradigm .... better matching the mainframe i/o
model.
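the keyboard-lock collision can be modeled in a few lines (a toy model of the assumed behavior, not actual 3270 terminal logic):

```python
class Terminal3277:
    """Toy half-duplex terminal: a host screen write that lands while the
    user is mid-keystroke locks the keyboard until RESET is pressed."""

    def __init__(self):
        self.typing = False
        self.keyboard_locked = False

    def start_typing(self):
        if not self.keyboard_locked:
            self.typing = True

    def host_screen_write(self):
        if self.typing:
            self.keyboard_locked = True   # collision: input inhibited

    def press_reset(self):
        # the user has to stop and hit RESET before typing can resume
        self.keyboard_locked = False
        self.typing = False
```

the hardware hack described above effectively removed the `keyboard_locked = True` transition.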

when the IMS group was moved out of STL to a remote building ... they
looked at remote/network 327x controllers and somewhat blanched when they
saw the comparison to what they had been used to with local channel
attach. they eventually went with HYPERChannel as a "channel extender"
over a T1 link, with local 327x controllers at the remote location.

later, PCs with 3277 terminal emulation for local connect allowed the
hardware hacks to be done in software instead.

RSCS started out as cpremote at csc. it later evolved into a
networking protocol. the first production networking connection of
note was between cambridge and endicott, working on the CP/67 "H" & "I"
systems (aka the modifications to support 370 virtual machines with
virtual addressing on a real 360/67). The "I" system had been running for a
year under an "H" system on a real 360/67 (or in some cases under an
"H" system under an "L" system on a real 360/67 ... because of security
concerns at cambridge with the number of MIT, BU, etc. students with
access to the system).
http://www.garlic.com/~lynn/2002j.html#70 hone acronym (cross post)

perkin-elmer was sort of interesting to me because they had bought
interdata and were marketing a pcm terminal controller. as an
undergraduate i had worked on a project that created the first pcm
terminal controller, using an interdata/3 (later enhanced to an
interdata/4 with interdata/3s dedicated as linescanners). we get
blamed for originating the 360 PCM controller market

At 12:09 AM 8/6/2002 -0500, Automatic digest processor wrote:
During those many years, we've enjoyed SHAREing. I'll mention one
partnership, as an example of many. During the early seventies, U
Maine (Orono, ME), the Perkin-Elmer Corp. (Danbury, CT), and Amoco
Research (Tulsa OK) joined forces. We all saw the value and
opportunities of VM and realized that it needed some tweaking and
development that IBM may not be willing to perform. So for the
benefit of each organization, we were connected via a file-transfer
ancestor of RSCS (a derivative of RASP, if I recall correctly). For
several years we collaborated closely, jointly maintaining VM and
developing many hundreds of "mods". Some of these were made available
to the VM community via SHARE, and many or most were either later
incorporated into the product or influenced VM Developers. We did
e-mail before the folks of CUNY talked us into sharing via BITNET.

Pete Fenelon writes:
Difference being Java's a heck of a lot bigger than it needs to be;
p-System was nice and neat.

one might claim that java is in some sense a spring derivative ... and
taligent was a pink derivative (aka apple's object operating system
morphing into taligent ... and sun's object operating system morphing
into java).

Greg Pfister writes:
There's some publically-available evidence of this, other than my own
first-person testimony:

Really going back in time, one of the roots (not the only one) can be
found in a presentation by Justin Rattner (Intel Fellow) in the Fall '98
Intel Developers' Forum, where he discussed through about 5 pages what
kind of I/O was needed to really fill the needs of a large, robust data
center. I/O. Not cluster interconnect; data center I/O.

i have no knowledge of this specific go-around ... but I do remember
the gyrations that the fiber-channel standard activity went thru at the
fabric interconnect layer regarding efforts to support "large, robust data
center I/O" ... aka, by any other name, half-duplex devices, half-duplex
protocol, and a half-duplex paradigm. given a fundamentally full-duplex
facility ... trying to give that facility the capability of
preventing full-duplex operation in support of a half-duplex paradigm
seemed like it was turning into a specification that was as large
as the whole rest of the fiber-channel effort put together.

Q: Trust in an X.509 certificate

grk@usa.net (G. Ralph Kuntz, MD) writes:
If I have an X.509 code-signing certificate for a company xyz.com,
signed by a well-known CA, the certificate contains the chain

xyz.com
well-known-secondary-CA-server
well-known-primary-CA-server

To really verify that the key belongs to xyz.com, don't I have to
follow the chain all the way to the top and verify that I know and
trust each intermediate signer (up to the primary CA, which I have in
a hard-coded file)?

What happens if the well-known CA issues a CA certificate to
dastardly-dan who then uses his CA cert to sign a certificate for
XYZ.com? The chain in the X.509 certificate would then contain

How would I recognize that this is not a legitimate certificate for
the company xyz.com?

is this trust in the integrity of the certificate (aka the actual bits)
or trust in the information contained in the certificate?

a primary purpose of ssl xyz.com domain name certificates ... is
trust issues with the domain name infrastructure. the
problem is that to certify some information ... a certification
authority (CA) has to validate the information being
certified. Typically a CA is not the authoritative agency for the
information certified ... so the CA has to check with the appropriate
authoritative agency. the authoritative agency for domain names (that
CAs have to check with) is the domain name infrastructure ... the very
thing about which there are the questions that originally gave rise to
ssl domain name certificates.

now there are CA oriented proposals for improving the integrity of the
domain name infrastructure .... allowing CAs to better trust the
information they are certifying. However these methods also go a long
way to improving the overall integrity of the domain name
infrastructure .... and minimizing the need/requirement for ssl domain
name certificates.

aka ... what is missing in the above picture is the authoritative
agency that a CA-server has to check with as to the validity of the
information being certified (i.e. it can either be a
dastardly-dan-CA-server or a dastardly-dan-information-source).

Jim Balter writes:
The amount of code produced by machine language programmers
is comparable to the amount produced by high level language
programmers, unless you're counting output of the compiler
for the latter rather than lines of source code. Even
then the factor is not 100 to 1.

i've done it lots of different ways .... rewritten in assembler that
ran 100 times faster than the previous assembler .... rewritten in
REXX that ran 100 times faster than the previous assembler ... and
rewritten in C that ran 100 times faster than the previous assembler.

in the latter two cases (actually in all the cases) ... the assembler
programmer(s) were spending so much time managing niggling hardware
issues that they weren't paying attention to fundamental scale-up
issues.

the rexx case (interpreted language) ... ok, maybe not quite 100 times
... maybe it was only 10 times faster, but it provided 10 times more
function. basically something like 15k-20k lines of assembler replaced
with an interesting 120 lines of assembler and 3k lines of rexx.

the rewrite in C was the "routes" part of an airline res system ...
which had been assembler. the initial pass was 100 times faster ...
but they then also wanted ten impossible things that they couldn't
currently do; adding support for the ten impossible things slowed it
down to only 10 times faster. however, even in C there were some
niggling hardware issues ... depending on how one loop was coded,
there was a factor of three difference in overall thruput because of
hardware cache-stride issues.

Jim Balter writes:
I didn't say anything above about speed. Nor did the original
poster say anything about rewrites -- just that his code
ran "very fast" -- which is meaningless without an explicit referent.

... well, at least the rexx rewrite took three months of part-time
effort .... I don't know how long the original implementation took,
but they had something like 15-20 full-time people just maintaining
the code.

EAL5

EAL5 permits a developer to gain maximum assurance from security
engineering based upon rigorous commercial development practices
supported by moderate application of specialist security engineering
techniques. Such a TOE will probably be designed and developed with
the intent of achieving EAL5 assurance. It is likely that the
additional costs attributable to the EAL5 requirements, relative to
rigorous development without the application of specialized
techniques, will not be large.

EAL5 is therefore applicable to those circumstances where developers or
users require a high level of independently assured security in a
planned development and require a rigorous development approach
without incurring unreasonable costs attributable to specialist
security engineering techniques.

EAL5 provides assurance by an analysis of the security functions,
using a functional and complete interface specification, guidance
documentation, the high-level and low-level design of the TOE, and all
of the implementation, to understand the security behavior. Assurance
is additionally gained through a formal model of the TOE security policy
and a semiformal presentation of the functional specification and
high-level design and a semiformal demonstration of correspondence
between them. A modular TOE design is also required.

This EAL represents a meaningful increase in assurance from EAL4 by
requiring semiformal design descriptions, the entire implementation, a
more structured (and hence analyzable) architecture, covert channel
analysis, and improved mechanisms and/or procedures that provide
confidence that the TOE will not be tampered with during development.

EAL6

EAL6 permits developers to gain high assurance from application of
security engineering techniques in a rigorous development environment
in order to produce a premium TOE for protecting high value assets
against significant risks.

EAL6 is therefore applicable to the development of security TOEs for
application in high risk situations where the value of the protected
assets justifies the additional costs.

This EAL represents a meaningful increase in assurance from EAL5 by
requiring more comprehensive analysis, a structured representation of
the implementation, more architectural structure (e.g. layering), more
comprehensive independent vulnerability analysis, systematic covert
channel identification, and improved configuration management and
development environmental controls.