krw <krw@att.bizzzz> writes:
Yep. Google "Check-21". I've gone into stores where they took my
check, scanned it, and handed it back to me. Scary, but no more so
than having a hundred clerks handle my check. With the routing and
account numbers the account is wide open. ...always has been.

most debit cards can now be used in either PIN-mode or signature-mode
(i.e. if they have the association "bug" or "logo" on the card). the
vulnerability is if the magstripe is skimmed ... then the counterfeit
card can be used in signature-mode (similar to credit card) w/o
requiring pin (and typically w/o some of the same protections that
credit cards have). same applies if such a debit card is lost or
stolen. you typically have to specially request a debit card that can
only be used in pin-mode (and not also be usable in PIN-less
signature debit mode).

also credit card magstripe technology has gone thru something of an
evolution. an early exploit was to take an account number (or even guess
at an account number using some known rules about account number
validity checking) and generate a counterfeit magstripe from scratch.

a secure hash code was added to credit card magstripes as a
countermeasure for such exploits (basically a combination of a bank
secret plus account number and misc. other details ... not obviously
derivable from just having an account number).
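as a rough illustration of the principle (the actual card-network check
values were DES-based CVV/CVC computations; the key handling and field
names below are made up), a keyed hash over the account details lets the
issuer detect a magstripe generated from scratch:

import hmac, hashlib

BANK_SECRET = b"issuer-master-key"        # assumption: per-issuer secret key

def stripe_check_value(pan, expiry, service_code):
    # derive a short check value from the bank secret plus account details
    data = (pan + expiry + service_code).encode()
    return hmac.new(BANK_SECRET, data, hashlib.sha256).hexdigest()[:6]

# issuer writes the value into the track data when the card is produced
track_value = stripe_check_value("4111111111111111", "0912", "101")

def stripe_is_genuine(pan, expiry, service_code, presented):
    # at authorization time the issuer recomputes the value and compares
    return hmac.compare_digest(stripe_check_value(pan, expiry, service_code),
                               presented)

print(stripe_is_genuine("4111111111111111", "0912", "101", track_value))  # True
print(stripe_is_genuine("4111111111111112", "0912", "101", track_value))  # False

an attacker who only knows (or guesses) a valid account number can't
produce the check value without the bank secret.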

to large extent, the original PIN-debit didn't view it really
necessary to do similar magstripe protection because they had "real"
two-factor authentication: card/magstripe as something you have and
PIN as something you know. Generating a magstripe from scratch with
some account number wasn't sufficient to do a fraudulent transaction
since a PIN was also required.

in the past year or so, there have been some association articles
deploring the lack of a secure hash on debit magstripes ... since
PIN-less signature-debit operations are now subject to some of the
same vulnerabilities as credit ... and to some extent the
associations had promoted the PIN-less, signature-debit ... w/o
requiring that (PIN-less) debit card magstripe technology be at least
equivalent to credit (originally believing that the PIN requirement was
a sufficient countermeasure to such exploits).

which has since come to be called e-commerce. existing consumer
protection credit card rules were leveraged for "card-not-present" and
"cardholder-not-present" transactions (i.e. remote, not-face-to-face
transactions; the rules had originally been created for "MOTO" ... aka
mail-order/telephone-order).

about the same time the X9A10 work began ... there was a different
effort begun specifically for a chip-based payment card. the
deployments so far have basically had the chip regurgitate effectively
a slightly enhanced version of the information on a magstripe. This
chip also requires the entry of an associated PIN ... supposedly
resulting in two-factor authentication ... chip as "something you
have" authentication and PIN as "something you know" authentication.

The chip authentication is static data that is very similar to what
might be found on a magstripe. Some of the infrastructure used for
skimming/recorded magstripe information turned out to also be able to
skim/record chip authentication information.

The attackers then installed the authentication information in
counterfeit yes cards. The terminal/chip protocols have been such
that once the terminal had authenticated a chip ... the terminal would
then ask the chip a number of questions:
a) was the correct PIN entered,
b) should the transaction be performed offline,
c) is the transaction within the account's credit limit

The counterfeit yes cards are programmed to always answer YES
(giving rise to the yes card label). Theoretically a valid PIN was
required for such an operation (resulting in two factor
authentication), but since counterfeit yes cards always answered
YES (regardless of what PIN is entered) ... any assumptions about
multi-factor authentication are negated (it is not necessary to know
the correct PIN to use a counterfeit yes card for fraudulent
transactions).
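a toy sketch of that decision structure (python, not the real EMV
dialogue; the classes, PINs and limits are made up) showing how a chip
that always answers YES defeats both the PIN check and the offline/online
decision:

class GenuineChip:
    def __init__(self, pin): self._pin = pin
    def pin_ok(self, entered): return entered == self._pin
    def approve_offline(self): return False      # force online authorization
    def within_limit(self, amount): return amount < 100

class YesCard:
    # counterfeit: always answers YES regardless of the question
    def pin_ok(self, entered): return True
    def approve_offline(self): return True
    def within_limit(self, amount): return True

def terminal_transaction(chip, entered_pin, amount):
    # (static) chip authentication assumed already passed -- it was skimmed
    if not chip.pin_ok(entered_pin):
        return "declined: bad PIN"
    if chip.approve_offline() and chip.within_limit(amount):
        return "approved OFFLINE (issuer never sees it)"
    return "sent ONLINE to issuer for authorization"

print(terminal_transaction(GenuineChip("1234"), "0000", 50))  # declined: bad PIN
print(terminal_transaction(YesCard(), "0000", 50))            # approved OFFLINE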

Furthermore, one of the countermeasures to various card exploits has
been doing "online" transactions and reporting account problems and
having the card's number flagged/de-activated. However, that is
dependent on the transactions being done online. In the yes card
case, the terminal is always instructed to perform an "offline"
transaction, negating the benefit of online transaction account
flagging.

Anne & Lynn Wheeler <lynn@garlic.com> writes:
there was some attempt to do a cms-like implementation on PCs in the
early 80s ... as well as straight vm/cms as xt/370 ... both the
cms-like and xt/370 suffered greatly on the PC platforms in the early
to mid-80s. cms-like and xt/370 tended to be much more disk intensive
than the PC applications of the period ... and the disks were 10-20
times slower than their mainframe counterparts. the interactive pc
applications of the period were usually carefully tailored to the
available PC resources/hardware. as a result, cms-like and xt/370
genre failed to catch on (although some number of the cms personal
applications were rewritten for pc environment and found uptake).

I don't know what (if anything) XXXXXX has written. XXXXXX
was an SE in LA who was also responsible for VMAP. He has left
the company and formed his own VM consulting company.

I do know many of his opinions are somewhat similar to MIPENVY,
although he worked in a very different environment. MIPENVY was
written by Jim Gray who worked here at research. He was very
instrumental in System/R and somewhat of an authority on data base in
general and distributed data bases in particular. He has gone to work
for Tandem. While he was here he did a lot of consulting with the
STL/IMS design people. MIPENVY was a short piece of a much longer
letter that he wrote at the request of his manager detailing numerous
things about IBM in general & IBM research in particular. I have
not gotten any feedback on how far up his letter has gone so far.

Jim Gray has had a high degree of exposure both inside of IBM &
outside. For whatever reason he has been telling people (with respect
to IBM questions) to call me in his place. Just before he left, I had
lunch with him & STL/IMS design people. He suggested that they should
now come to me (IMS must be in deep hurt; what I really know about O/S
in-depth is 10 years old). I've also gotten calls from BofA
management about System/R, VM, & data base stuff.

BofA now has one of the original IMS design people as head of
computing. They are hiring a number of the good IMS people out of STL
(or wherever they can get them -- rumor is they have or will have
larger IMS development group than IBM). STL also is feeling very
pressured by the Japanese. Claim is that the Japanese IMS is much
better than IBM's. STL has crash program to implement enhancements to
IMS to bring it up to the current Japanese level. Only problem is
that FCS is targeted for 1985 (although there are some number of bets
out that it will slip).

I was down in LA in June for a customer call at LA Times & spent most
of the evening with XXXXXX. He was very unhappy with the way he saw
IBM going at that time. Too much pressure from the branch to sell MVS
among other things. He has a Radio Shack computer at home & believes
that there ought to be a crash program to get most of the CMS function
into a user's terminal. Other companies are getting very close. In the
next couple of years there is going to be a lot of pressure from that
direction.

re: MIPENVY script; while I was in POK last week teaching a
performance and scheduling class to the VM development group and
change team, Jim Gray departed IBM for Tandem. He left a goodbye note on
my terminal. There was a cryptic remark about some new project that
will seriously affect IBM. Knowing Jim Gray, it was not just sour grapes
leaving this company. Considering all the prototype projects that
lots of people have been doing for several years with multiple
(relatively) "small" processors, both tightly & loosely coupled, it is
surprising that nobody has come out with something sooner to seriously
impact the glasshouse, mainframe market.

Anne & Lynn Wheeler <lynn@garlic.com> writes:
they had tried running MVS on processors used for "testcell" testing
(engineering and development devices) and MTBF was on the order of
15-minutes (crashes and/or hangs) ... and they had been doing the testing
with machines in scheduled "stand alone" time (with simple, custom
written stand-alone monitors).

In the past, I've made reference to earlier attempts using MVS for
"testcell" operation ... being able to test engineering hardware under
development in an MVS operating system environment ... and MVS
experiencing a 15min MTBF.

I had done a I/O supervisor redesign and rewrite to provide an
operating system environment for on-demand, concurrent testing of
engineering and development hardware (in bldg. 14 & bldg. 15):
http://www.garlic.com/~lynn/subtopic.html#disk

following is a simple reference to preparing to release the product
3380 hardware and testing its operation under MVS.

even tho the above email was purely internal and never was avail.
outside the corporation ... it still resulted in the manager of MVS
RAS generating quite a bit of uproar (something along the line of
trying to kill the messenger?)

Anne & Lynn Wheeler <lynn@garlic.com> writes:
We had added a new facility to the SJR/VM in 1979 (module DMKCOL)
enabling capture of all disk record accesses. This information was then
used in modeling various kinds of caching strategies. It was run on a
number of systems in the san jose area (including some having batch
operating systems running in virtual machine).

One of the findings was that a system global cache (i.e. with global
LRU replacement policy) outperformed any partitioned cache strategy
(aka effectively local LRU replacement strategy) ... where the
aggregate amount of electronic cache was the same in the two cases.
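a small simulation sketch (made-up workload, not the DMKCOL data or the
original modeling code) of that comparison: with the same aggregate number
of cache slots, a single global LRU beats fixed per-user partitions when
the reference load is skewed, because idle partitions waste slots:

from collections import OrderedDict
import random

class LRU:
    def __init__(self, slots):
        self.slots, self.map, self.hits, self.refs = slots, OrderedDict(), 0, 0
    def ref(self, key):
        self.refs += 1
        if key in self.map:
            self.hits += 1
            self.map.move_to_end(key)
        else:
            if len(self.map) >= self.slots:
                self.map.popitem(last=False)      # evict least recently used
            self.map[key] = True

random.seed(1)
USERS, TOTAL_SLOTS, REFS = 4, 400, 20000
# skewed reference stream: user 0 does most of the I/O
trace = [(random.choices(range(USERS), weights=[8, 1, 1, 1])[0],
          random.randrange(1000)) for _ in range(REFS)]

glob = LRU(TOTAL_SLOTS)
parts = [LRU(TOTAL_SLOTS // USERS) for _ in range(USERS)]
for user, rec in trace:
    glob.ref((user, rec))
    parts[user].ref(rec)

print("global hit ratio:      %.2f" % (glob.hits / glob.refs))
print("partitioned hit ratio: %.2f" %
      (sum(p.hits for p in parts) / sum(p.refs for p in parts)))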

One of the other pieces of information that started to emerge from the
modeling work was finding some amount of meta-activity ... that
specific collections of records were frequently accessed in longer
term cyclic pattern (once-a-day, weekly, monthly, etc). This started
to have implications as CMSBACK morphed and added much more
sophisticated filesystem management strategies.

some work was also done on using the real-time capture
characteristic of DMKCOL to aid with real-time record allocation.

I work in Aids Development in Poughkeepsie in VM modeling &
measurement areas (VMPR, SMART, VMAP). Recently, we have been
investigating cache dasd and heard about some mods you made
(presumably to IOS) which collects and logs 'mbbcchhr' information.

We have received a hit ratio analysis program from XXXXXX who informed
us of your work. The point is that we would like to make a package
available to the field, prior to fcs, which would project the effect
of adding a cache of a given size. Can you give me your opinion on the
usability of such a package. I am presuming that most of the work
involves updating and re-loading cp...I would like to take the code
and try it myself...can it be run second level?? Appreciate your
response...

CP mods. include a new module (DMKCOL), a new bit definition in the
trace flags, a couple new diagnose codes, a new command, and a hit to
DMKPAG (so code can distinguish between cp paging and other I/O) and
a hook in dmkios. no problem running code 2nd level.

1) collected data is useful for general information about I/O
characteristics but there are a lot of other data gatherers which
provide almost as much info (seek information, but not down to the
record level).

2) I guess I don't understand the relative costs for an installation
to justify cache. I would think in most cases a ballpark estimate can
be made from other data gatherers. It would seem that unless the cache
is going to be relatively expensive this may be something of an
overkill.

3) From the standpoint of impressing a customer with IBM's technical
prowess, the hit-ratio curves are a fantastic 'gimmick' for the
salesman. Part of my viewpoint may be based on having made too many
customer calls; I've seen very few decisions really made on technical
merit.

4) Hit-ratio curves may be in several cases a little too concrete. An
account team will need additional guidelines (fudge factors) to take
into account changing load (/growth).

Will send code. The updates are currently against a sepp 6.8/csl19
system. dmkios update should go in w/o problems. updates for
new diagnose and command you will have to adapt to your own system
(command & diagnose tables have been greatly changed).

DOS C prompt in "Vista"?

Pascal Bourguignon <pjb@informatimago.com> writes:
The next change had something esoteric to do with save-area chaining
conventions -- again, for the sake of conventions and to keep the dump
analysis tools happy.

Note that the "null program" has tripled in size: both in terms of the
number of source statements and in terms of the number of instructions
executed!

there was an APAR/fix that had to do with attributes specified in the
linkedit step (there was recently a long thread in the mainframe n.g. on
various meanings of rent, serially reusable, etc) ... i.e. if a copy of
the program was already loaded, could it be directly used or did it
have to be reloaded from disk.

jmfbahciv writes:
Someday just watch the guy ahead of you play with the swipe
machines. Notice when something goes wrong. Invariably,
the card ends up getting swiped again, and sometimes a third
time. You don't need to manufacture a fake.

the data breach and security breach operations frequently use a
similar scenario ... but instead of the attacker doing real-time
recording from a compromised terminal ... they collect the recorded
information from transaction logs. frequently there is quite a bit of
effort to disguise the fact that the transaction logs have been copied
... since when it is discovered, all the affected account numbers are
frequently deactivated and cards re-issued (which may be a $10-$20
expense per account, aka not just mailing the new card but all the
associated data-processing, administrative and notification activity).

there have been a number of recent "new year" stories about the
aggregate number of accounts involved in recent breaches having
passed 100 million. Phishing and various computer Trojans
have been other mechanisms for harvesting information that enables
fraudulent transactions.

some of the "new year" breach stories:

Encryption a perfect response to the Year of the Breach
http://scmagazine.com/us/news/article/623768/encryption-perfect-response-year-breach
Bots, breaches and bugs plague 2006
http://www.securityfocus.com/news/11432
By the numbers: A dismal year for data breaches
http://blogs.zdnet.com/BTL/?p=4169
VanBokkelen: 2006: The year of the breach

Gift Cards have a different kind of skimming vulnerability ... where
crooks record numbers of unsold cards at stores and then return later
to see which ones have been activated (which they promptly attempt to
empty). there have been some recent stories that this is a new exploit
just this year ...

Three gift card scams take value from your presents
http://www.twincities.com/mld/twincities/living/16267723.htm

jmfbahciv writes:
Someday just watch the guy ahead of you play with the swipe
machines. Notice when something goes wrong. Invariably,
the card ends up getting swiped again, and sometimes a third
time. You don't need to manufacture a fake.

in traditional credit card scenario ... there is a merchant financial
institution that is financially responsible for the merchant (acquirer)
and a consumer financial institution that is financially responsible
for the consumer (issuer). the transaction goes from the merchant
terminal to the acquirer and then to the issuer.

the issuer processing frequently includes various kinds of account
specific fraud detection patterns ... like calling you up if
particularly suspicious transactions are going on for an account.

in similar manner, the acquiring processing will also be looking for
merchant (and/or merchant terminal) fraud patterns. a terminal doing
duplicate transactions (a really simple replay attack scenario)
and/or multiple transactions against accounts might not last a day
... and the fraudulent transactions not even get posted for
processing. A duplicate transaction (in a merchant terminal scenario)
can be fairly easily recognized ... and the processing would
eventually result in duplicate credit being posted to the merchant
bank account. depending on when the fraud is recognized ... such a
credit might not even get scheduled ... or if it is performed, it may
be reversed in straight-forward manner. furthermore depending on
relationship and standing between the acquiring financial institution
and the merchant ... postings might actually be delayed several days
(and/or go into some sort of impounded account).
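a toy sketch (python; not any acquirer's actual fraud logic, and the
detection window and keys are assumptions) of recognizing the simple
duplicate/replay pattern described above:

from collections import defaultdict

WINDOW = 300                     # seconds; assumed duplicate-detection window
seen = defaultdict(list)         # (terminal, account, amount) -> timestamps

def submit(terminal, account, amount, ts):
    key = (terminal, account, amount)
    recent = [t for t in seen[key] if ts - t <= WINDOW]
    seen[key] = recent + [ts]
    if recent:
        return "flag terminal %s: possible duplicate/replay" % terminal
    return "queued for posting"

print(submit("T1", "4111...1111", 25.00, 1000))   # queued for posting
print(submit("T1", "4111...1111", 25.00, 1030))   # flagged 30 seconds later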

this is some of the reason that a compromised terminal is frequently
used simply for recording data ... with the actual fraudulent
transactions happening as far away and as widely dispersed as possible.

SSL info

"UKuser" <spidercc21@yahoo.co.uk> writes:
I'm going to be working with some SSL pages (php) and wondered if there
were any good design/development sites for security tips etc so I miss
out on making the "obvious" blunders - whatever they may be.

I've found: http://blogs.msdn.com/ie/archive/2005/04/20/410240.aspx
which is very good and lists two possible problems. Here then is the
newbie question.

If a form is hosted on a HTTP (non secure) site and points to a HTTPS
in the action tag, does this mean that the page has already made the
SSL connection/handshake? Does the browser recognize the potential for
a HTTPS connection and therefore do the same as if it was a full SSL
page?

Secondly, why is mixed content so bad (any sites would be great)? I
appreciate various elements could be secure/unsecure but how would that
pose a risk?

originally SSL was supposed to address two issues: 1) are you really
talking to the server that you think you are talking to, and 2)
encryption (hiding) of transmitted information.

for #1, the user typed in the URL of the server they wanted to talk
to, the server returned a SSL domain name server digital certificate,
the browser validated the digital certificate and then compared the
domain name in the user supplied URL with the domain name in the
digital certificate.
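a minimal sketch of check #1 the way current libraries package it (the
host name is just an example): the library validates the server's
certificate chain and matches the certificate's domain name against the
host name taken from the URL the user supplied:

import socket, ssl

hostname = "www.example.com"
ctx = ssl.create_default_context()        # verifies the chain and the hostname
with socket.create_connection((hostname, 443)) as sock:
    with ctx.wrap_socket(sock, server_hostname=hostname) as tls:
        print("negotiated:", tls.version())
        print("certificate subject:", tls.getpeercert().get("subject"))

if the certificate doesn't validate, or its name doesn't match, the
wrap_socket call raises an error instead of silently connecting.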

fairly quickly a problem cropped up ... merchants discovered that
using SSL for the complete processing cut their processing thruput by
80-90 percent ... so they restricted SSL to just the checkout/payment
processing. So now a user enters a non-SSL URL ... which means there is
no check that the server the user is talking to is really the server
the user thinks they are talking to.

the users click on a server-provided button ... which supplies the
(SSL) URL. In this situation ... rather than checking that the server
is the server the user thinks they are talking to ... the only thing
checked is that the server is whoever it claims to be
(i.e. the server provides both the URL with a domain name as well as
the digital certificate with the domain name). it would take a fairly
inexperienced attacker to claim to be one server and not be able to
provide a digital certificate that substantiates that claim. this is
also what is behind some of the Phishing emails that provide (SSL) URLs
to click-thru on ... where the attacker provides both the URL and a
digital certificate that supports that they are who they claim to be.

there is separate catch-22 scenario that certification authorities
are looking at for improving the integrity of the domain name digital
certificates that they issue. currently they require a lot of
identification information as to the applicant for the digital
certificate. they then go thru a time-consuming, costly, and
error-prone process of cross-checking that the provided information (by
the digital certificate applicant) matches the information on-file
with the domain name infrastructure as to the owner of the specific
domain.

the proposal is for having domain name owners provide a public key to
the domain name infrastructure when they register the domain name.
now the certification authorities can require that digital certificate
applications be digitally signed. Now the certification authorities
can do a real-time retrieval of the on-file public key (from the
domain name infrastructure ... analogous to what they do now when they
do real-time retrieval of information as to the owner of the domain
name for matching) ... and use it to validate the digital
signature. This turns a time-consuming, error prone, and costly
identification matching process into a much more reliable, simple, and
less expensive authentication process.
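a hedged sketch of that proposal (the key type and names are purely
illustrative, not any certification authority's actual interface): the CA
does a real-time retrieval of the public key registered with the domain
name infrastructure and verifies the digital signature on the application:

from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey
from cryptography.exceptions import InvalidSignature

# domain owner's key pair; the public half is assumed to be on file with
# the domain name infrastructure from when the domain was registered
owner_key = Ed25519PrivateKey.generate()
dns_on_file = {"example.com": owner_key.public_key()}

application = b"certificate application for example.com"
signature = owner_key.sign(application)

def ca_accepts(domain, application, signature):
    try:
        dns_on_file[domain].verify(signature, application)   # real-time retrieval
        return True
    except (KeyError, InvalidSignature):
        return False

print(ca_accepts("example.com", application, signature))     # True

the expensive identification cross-check is replaced by a simple
authentication step against information already on file.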

the catch-22 is that if the certification authority can do a
real-time retrieval of the on-file public key for digital certificate,
then potentially the rest of the world can also ... eliminating
the need for the digital certificates ... misc. past posts mentioning
the catch-22
http://www.garlic.com/~lynn/subpubkey.html#catch22

Justa Lurker <JustaLurker@att.net> writes:
The 'PC' platform has put useful computing power in the hands of
millions more people and organizations than your beloved PDP-10,
Lynn's beloved VM/CMS, etc. ever did or could have. Those were fine
systems in their own right, but like fishing stories and old
girlfriends, recollections and perceptions automagically improve with
age.

"The Elements of Programming Style"

stanb45@dial.pipex.com (Stan Barr) writes:
Hardware Forth implementations typically provide only a CALL/RET and
some sort of IF, ELSE, NEXT construct for loops - forms of JMP - but
not usually any programmer usable JMP, although it's normally possible
to simulate one if you feel the need.

Forth is an example of a low-level goto-less language. It's easy enough
to write a GOTO but I've never seen it used except as a JMP in assembler
for conventional processors.

"I have been a maverick programmer for 25 years, constantly at odds with
conventional wisdom. I developed the FORTH programming language to express
the creativity of the expert. It remains unparalleled in efficiency, brevity
and versatility."

"Countless applications and thousands of FORTH programmers later, we finally
obtain hardware that can match the software. The FORTHchip boasts an elegantly
simple architecture for the ultimate in programmability and throughput."

Thursday, November 29, 1984 4:00 p.m.

5M Conference Room
1501 Page Mill Road
Palo Alto, CA 94304

NON-HP EMPLOYEES: Welcome! Please come to the lobby on time so that you may
be escorted to the conference room.

moving on

Anne & Lynn Wheeler <lynn@garlic.com> writes:
For relocate shared segment support, a shared segment may appear
anywhere within a virtual machine's address space (it does not need to
be at the position specified in the VMABLOK). The way I handled it in
DMKVMA was to use the PTO pointers in the VMABLOK to check the shared
segments, rather than using the segment index number to displace into
the segment table and pick-up the STE.

One of the co-op students that helped me write the original shared
segment support for release 2 VM (including the sub-set that is now in
the product DCSS) is now with Interactive Data Corporation (IDC). They
have taken the idea and put a whole group on expanding the idea. They
now call it Floating segments (instead of relocating segments). They
have a modified assembler for generating adcon free code and are
working on the compilers. All this work they have done has greater
significance than they realize. It would greatly simplify conversion
to an increased address space size.

"XXXXXX" was one of the two original people at the Los Gatos lab.
responsible for mainframe pascal. He went on to be vp of
software development at MIPs and later showed up as general
manager of the SUN business unit responsible for the fledgling
JAVA.

re: relocating shared code; XXXXXX thinks that he can have Pascal/VS
not using relative adcons and also "compile" code that doesn't have
relative adcons. Adcons are generated as absolute, the target address
minus the base address of the module. The loader supports both
positive and negative displacements. When it comes time to transfer
control the routine picks up the absolute adcon and adds in the value
of the base register to resolve the relocated address. Code can then
be generated in shared modules (ref: shared modules, RJ2928) and the
CMS loadmod upgraded to load code into available segments (of course
the same procedure works for non-shared modules also).
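a toy illustration (python rather than S/370 assembler or the actual
Pascal/VS code generator; offsets are made up) of the base-relative adcon
scheme: the module image stores offsets, and transfer of control adds the
current base register value, so the same shared image works wherever the
loader places it:

# adcons stored in the module: target address minus the module's base
routine_offsets = {"INIT": 0x0100, "WORK": 0x0480}

def resolve(base_register, name):
    # pick up the base-relative adcon and add in the base register value
    return base_register + routine_offsets[name]

# the same module image loaded at two different segment addresses
print(hex(resolve(0x00200000, "WORK")))   # 0x200480
print(hex(resolve(0x00740000, "WORK")))   # 0x740480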

I have an old t-shirt that some people at Amdahl were distributing
when vm/sp initially came out; it has a particularly gory looking
vulture labeled vm/sp and a line underneath it says vm/sp is
waiting for you.

i have been working on bringing up vm/sp rel 1 on a 4331 processor
here at the sci. center. i spoke with XXXXX this morning and he
mentioned your name and your feelings about sp. i felt i should tell
you about some of our problems etc.

first of all, they changed the rbloks copy and redefined redevstat
to rdevsta4. that proved to be devastating to us because we run some
software that used the field.

there's a ctca bug in sp1.
pvm gets into infinite loops
the system never was able to stay up one night !!
as you know, the nuc has grown to well over 240-260k for a large
system. that is intolerable on 4300 processors. i cut out
parts of cp and i have it down to 196k. (up from 170k)

about the only thing that works right is XXXXX's SPM stuff!

i am going to go back to release 6 plc 11 tonight. i am putting vm/sp
aside for awhile until a few plcs are out.

cp67 was made available to lincoln labs in apr67 ... and then it was
released to customers in may68. however, between apr67 and may68 it
was also installed the last week in jan68 at the univ. where i was an
undergraduate (and already responsible for the production os/360
system). i then got to play with cp67 (mostly on weekends). I was
asked to participate in the product announcement at the spring share
meeting in houston.

re: MIPENVY script; while I was in POK last week teaching a
performance and scheduling class to the VM development group and
change team, Jim Gray departed IBM for Tandem. He left a goodbye note on
my terminal. There was a cryptic remark about some new project that
will seriously affect IBM. Knowing Jim Gray, it was not just sour grapes
leaving this company. Considering all the prototype projects that
lots of people have been doing for several years with multiple
(relatively) "small" processors, both tightly & loosely coupled, it is
surprising that nobody has come out with something sooner to seriously
impact the glasshouse, mainframe market.

Lynn, SPD mgmt. has agreed to pursue your generalized solution
to the Q-DROP problem described in apar vm11293 per your suggestion
at our meeting in Pok. on 10/1. We look to you to provide the following:
1) a general description of the function including problem
description and proposed solution
2) list of modules/macros impacted with a brief
description and an estimate on LOC to be added/changed
3) unit tested code on an SP1 base.

Endicott will run regression and performance tests and coordinate a plan
to XMIT the code to the field.

We also enlist your aid in diagnosing problems that may occur during our
tests (normally remotely, but on site if required), reviewing our perf.
runs, and assisting with any Pubs changes.

Please call me on tie line xxx-xxxx as soon as possible in order that I
may understand any requirements you have to provide the above. We would
like to have items 1 and 2 by 10/10 and 3 by 10/17 if possible.

re: loagwait fix; will have code & a couple paragraphs either today or
tomorrow. Will be against a release 6, ltr 11 system. Have updates for
sp system but they were prior to permanent application to base and
resequencing. May be sometime next week (or later) before I can get
ahold of SP source for modules affected and have sequence nos.
converted. VMBLOK hit is immaterial, should be able to do it there;
all that is required is space obtained out of currently reserved fields.

vm/sp1

Anne & Lynn Wheeler <lynn@garlic.com> writes:
I have an old t-shirt that some people at Amdahl were distributing
when vm/sp initially came out; it has a particularly gory looking
vulture labeled "vm/sp" and a line underneath that says "vm/sp is
waiting for you".

been checking latest vmshare on SP1.07 & sp1.08 problems. While i was
at it, thought to go back and check the SP1.05 & SP1.06 comments that we
have already from our distributed vmshare (almost a month old). Listed
are recommendations to pull 11780 & 12111 to dmkcns which lead to prg5s
(updates are 1.02). Also listed are recommendations to pull 11439,
118841, 12448. Finally is comment to pull 13311 (1.06) to QCN which
leads to bad problems with attached graphics on 3277. Finally is
description of problem in DMKRGA which leads to PSA004 abend because
code runs off the end of NICBLOK and uses bad area for VMBLOK (this
last is open APAR 14???).

re: sjr system; have SJR 3081 up and running at the SCH level ... I've
slightly re-arranged the SJC2 muxfile to group the Group fair share
changes with the rest of the SCH changes. Also have fixed misc. bugs
in other updates. Started the process of enabling the majority of the
SJC2 updates up thru the SPOOLMAX updates (there is a complete TODO
level assembly console log with filename TODO5B already out on the 109
disks). A whole slew of files are waiting at SJRL destined for LSGVMB
... but the connectivity between bldgs 28 & 29 is down at the
moment. Hopefully those will make it thru on Monday. Still have some
hardware problems with this 3081 (which will have to be cleared up on
Monday). Reasonably good chance of getting majority of the rest of the
updates activated by January 1st.

SSL info

Ertugrul Soeylemez <never@drwxr-xr-x.org> writes:
That also helps exploiting the full potential of SSL. For example, you
can't just authenticate the server, you can also authenticate the client
or anything else via a certificate. So no more username/password pairs
are needed. Users don't need to login manually, they just present their
certificate, which is straightforward in today's modern browsers.

the first place that became especially evident was when we were
brought in to do some consulting with this small client/server startup
that had this technology called SSL and wanted to do payment
transactions ... for something that has since come to be called
e-commerce. previous post in this thread:
http://www.garlic.com/~lynn/2007.html#7 SSL info

at the time SSL didn't have mutual authentication ... but we required
it for the payment gateway (the webservers authenticated the payment
gateway using an installed public key ... and the payment gateway
authenticated the webservers using on-file public keys). Additional
code had to be added to SSL to do mutual authentication, and since it
already had a heavily certificate-based orientation there still were
digital certificates that were passed back-and-forth ... but in
actuality ... each authorized webserver had the information about the
payment gateway preloaded as part of the payment processing software
... and the payment gateway had onfile information about each
authorized webserver. it quickly became strikingly apparent that
the digital certificates were redundant and superfluous. misc.
other posts mentioning ssl digital certificates
http://www.garlic.com/~lynn/subpubkey.html#sslcert

so the dominant forms of client authentication in the world wide web
environments are KERBEROS and RADIUS. These started out being
userid/password. However, both KERBEROS and RADIUS have had
definitions and implementations where client public keys are registered
(in lieu of passwords), servers transmit some random information (as a
countermeasure to replay attacks), and the clients (using their
private key) digitally sign it and return the digital signature ... which
the server then verifies with the onfile public key.
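a small sketch of that certificate-less pattern (kerberos/radius protocol
details omitted; the key type and names are made up): a registered public
key on file, fresh random data as the replay countermeasure, and the
returned digital signature verified against the on-file key:

import os
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey
from cryptography.exceptions import InvalidSignature

client_key = Ed25519PrivateKey.generate()
accounts = {"lynn": client_key.public_key()}   # public key registered in lieu of password

def login(userid, sign_fn):
    challenge = os.urandom(32)                 # random data, never reused
    signature = sign_fn(challenge)             # client signs with its private key
    try:
        accounts[userid].verify(signature, challenge)
        return "authenticated"
    except (KeyError, InvalidSignature):
        return "rejected"

print(login("lynn", client_key.sign))          # authenticated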

The original KERBEROS PKINIT (public key) draft initially just
specified certificate-less operation ... but under a great deal
of lobbying, certificate-mode operation was also added.

One of the scenarios for various webserver software is that client
authentication has frequently just been a stub model ... although
there are plugins for webserver software that provide KERBEROS and
radius interfaces for client authentication. In many of these typical
implementations ... the KERBEROS and radius implementations are done
in such a way that it is possible to specify password or digital
signature operation on an account-by-account basis ... again
certificate-less operation. misc. past posts about KERBEROS operation
http://www.garlic.com/~lynn/subpubkey.html#kerberos
and radius operation
http://www.garlic.com/~lynn/subpubkey.html#radius

sort of the original idea for certificate-mode of operation was that
there was interaction between two parties that had no prior knowledge
of each other. it was necessary for the certificates to carry all the
necessary information. this also sort of gave rise to the x.509
identity certificates from the early 90s.

however, by the mid-90s it was realized that x.509 identity
certificates, typically heavily overloaded with personal information,
represented significant privacy and liability issues. as a result,
you saw many institutions dropping back to what they called
relying-party-only certificates ... misc. past posts mentioning
RPO certificates:
http://www.garlic.com/~lynn/subpubkey.html#rpo

the issue here is that an RPO-certificate basically just carries a
public key and some sort of account number. the relying party takes
the account number and retrieves the appropriate account record
... where the actual personal information including any authorization
information is kept. however, since the public key originally had to
be registered and stored in an account record before any digital
certificate was issued, it was trivial to show that the relying party
retrieving the account record can include the registered public key.
Again, the digital certificate becomes redundant and superfluous.

basically the kerberos and radius PKI implementations with digital
certificates typically have done something similar to RPO-certificates,
the permission/authorization information is stored in an onfile
account record. if any of the client PKI digital certificates
authentication operations (kerberos, radius, or roll-your-own) at some
point have to retrieve any sort of account record as part of
permissions and authorizations ... it is then trivial to also show
(again) that the digital certificates are redundant and superfluous.

The digital certificate actually only has a meaningful role ... when the
server never has to resort to really knowing who the client is and/or
anything about the client ... other than what is presented in the
digital certificates. No additional information related to things like
permissions, authorizations, billing, accounts, etc is required.

it's like anybody with a valid digital certificate can log into your
system and have all permissions regardless of who they are ... and
there is no userid account record for the system that is required to
provide any additional information related to the user and/or what the
user is allowed to do or not do.

as soon as the process has to resort to any additional information
(beyond what is provided in the digital certificate) for doing any
operation what-so-ever for the client ... then it is trivial to
show that the digital certificate is redundant and superfluous.

SSL info

Anne & Lynn Wheeler <lynn@garlic.com> writes:
however, by the mid-90s it was realized that x.509 identity
certificates, typically heavily overloaded with personal information,
represented significant privacy and liability issues. as a result,
you saw many institutions dropping back to what they called
relying-party-only certificates ... misc. past posts mentioning
RPO certificates:
http://www.garlic.com/~lynn/subpubkey.html#rpo

using that comparison, there is the possibility that all personal
information would be eliminated from the passport chips ... for
similar privacy and liability reasons that resulted in change-over to
relying-party-only certificates in the mid-90s (and away from x.509
identity certificates frequently overloaded with personal information)
http://www.garlic.com/~lynn/subpubkey.html#rpo

it was also in this period that several people made claims that
upgrading financial transactions with client/consumer digital
certificates would bring retail financial transactions into the modern
era.

the issue here (as in the passport case) is that credentials and
certificates are constructs developed for providing trusted
information for an offline environment. in the 70s, electronic payment
networks made the transition from the offline environment to the
online environment ... and supported real-time information regarding
authentication and authorization. a digital certificate-based offline
paradigm for financial transactions, rather than representing any
modernization, would result in reverting to a pre-70s paradigm.

it was in this period that we also coined the term comfort
certificates ... the redundant and superfluous use of stale, static
digital certificates (an offline paradigm construct) in an online
environment. The comfort certificates provided familiarity and
comfort to mindsets that were stuck in the old fashion offline
paradigm (which required credentials and certificates to provide
trusted information distribution) ... and had difficulty making the
transition to a trusted online integrity paradigm.

our repeated observations about the offline digital certificate model
actually regressing effective operation by several decades (rather
than representing any modernization) were some of the motivation behind
OCSP (online certificate status protocol). However, our observation
was that it was really a rube goldberg fabrication ... for any given
operation, what is more valuable: 1) a real time transaction
involving real time authentication and authorization information
... or 2) a real time transaction providing status indication about
stale, static digital certificate information?

i.e. adding chips to payment cards for use in retail transactions.
there were some number of claims that adding the chips even increased
the vulnerabilities ... compared to a similar magstripe card w/o a
chip.

lynn@GARLIC.COM (Anne & Lynn Wheeler) writes:
i.e. "3090" service processor was a modified version of vm370
release 6 running on a pair of 4361 processors, most of the
screens/menus written in IOS3270. Part of this was the result of the
experience with the 3081 service processor where all of the software
was totally written from scratch (trying to get to some amount of
off-the-shelf stuff).

minor folklore ... my dumprx was selected for use as diagnostic
support for the 3090 service processor ... in the early 80s, i had
(re-)implemented large superset of IPCS function totally in rexx
http://www.garlic.com/~lynn/submain.html#dumprx

re: dumprx; DUMPRX is an experimental IPCS written in rex. It
currently supports only CP storage and makes use of local S.J.
research extensions to the COMMON 'LOCATE CP' command (for live
system). Currently being planed is an exec to extract label & format
information from a specified MACLIB.

re: dumprx exec; does anybody know where the machine copy of the CP
abend codes are? &/or how to obtain them?? SP HELP appears to have the
"messages" portion of the MESSAGES & CODES manual ... but i didn't see
the abend codes anywhere.

I have got the interface for processing PRB files nearly done. Have
the LOCATE, RIO, & VIO commands yet to go. I also need to improve its
interface to the REX interpreter for returning information. The
program is called DMPRXX & is NUCXLOADed.

Except for FSX & MOVEFILE, DUMPRX will completely run in subset mode.
I haven't tried pre-NUCXLOADing FSX, which may solve that problem.
MOVEFILE is used to extract a member from a maclib for doing a
formatted control block display. If IOX had a BLDL function, it would
be possible to extract a member with FCOPY using the BLDL values. I
could resort to dummying up a standard EXEC & calling EXSERV BLDL.

One of the major features that DUMPRX will have over the standard
DUMPSCAN is the ease of writing new dump analysis extensions &/or
formatting routines (an example is DUMPRXB EXEC which does a formatted
display of storage giving instruction op-codes). It will be possible
to invoke the editor from DUMPRX, write a new EXEC (&/or modify an
existing one), return to DUMPRX ... & then invoke the procedure.
Simplest method to "extend" the DUMPRX command language is to require
the user to type:

'CMS EXEC filename anyargs'

which would then invoke the new EXEC. That would lose the symbolic
name capability tho. Anybody have suggestions for a command language
syntax solution?

Lynn,
I am in the process of changing jobs from Charlotte to the
IBM/ACIS and Cornell Supercomputer Facility at Cornell. I understand
that there will be an overview of HSDT to the Cornell staff tomorrow
as a possible additional project for T2-T3 speed connections to NSFnet
etc. My new job will be to understand the Cornell etc Network
environment as it relates to the Supercomputer facility (including
WISCNET , TCP/IP , CSNET, etc etc).

My background has been system support for MVS/Jes2 , VM/SP, RSCS,
PVM, ACF/VTAM, 3705 EP/NCP etc. I was in the VNET TSG. Background
mainly networking for the last 7 years.

With all the intro done, Could you send me any documentation on
HSDT that you might have. While I was in Hursley you sent several
documents to XXXXXX, but my copies were lost when my Hursley ID was
cancelled.

from above:
"There are two career paths in front of you, and you have to choose
which path you will follow. One path leads to promotions, titles, and
positions of distinction.... The other path leads to doing things that
are truly significant for the Air Force, but the rewards will quite
often be a kick in the stomach because you may have to cross swords
with the party line on occasion. You can't go down both paths, you
have to choose. Do you want to be a man of distinction or do you want
to do things that really influence the shape of the Air Force? To be
or to do, that is the question." Colonel John R. Boyd, USAF 1927-1997

From the dedication of Boyd Hall, United States Air Force Weapons
School, Nellis Air Force Base, Nevada. 17 September 1999

DOS C prompt in "Vista"?

Eric Sosman <Eric.Sosman@sun.com> writes:
Not a clue. Atex still exists as a company, but has gone
through multiple changes of ownership and focus and apparently
has little resemblance to its 1970's and 1980's self. I doubt
that they're still selling systems based on PDP-11 hot-rods.

ROCHESTER and WHITE PLAINS, NY, June 15 . . . Eastman Kodak
Company and IBM Corporation today announced an alliance to develop an
open publishing systems architecture and a new generation of
integrated, enterprise-wide publishing systems for newspapers and
magazines worldwide.
Under this alliance, Kodak's Electronic Pre-Press Systems,
Inc. (EPPS) subsidiary, particularly its Atex Publishing Systems
units, and IBM's Media Industry Marketing intend to combine their
technological expertise to establish and support a publishing systems
architecture based on open industry standards.
This architecture will enable publishers to integrate their
pre-press and business operations into an enterprise-wide publishing
solution. Pre-press operations include the editorial, advertising and
production activities that go into creating a newspaper or magazine.
Business systems include circulation, finance, management reporting,
and credit checking and billing for advertising.
IBM will provide technical, marketing, development and financial
resources to this endeavor and will play an active role in strategic
and operational activities. EPPS will provide its publishing-industry
and applications-software expertise. Other terms of the agreement
were not disclosed.
Kodak's John White, vice president and general manager,
Integration and Systems Products Division, said, "The alliance with
IBM will enable us to focus on imaging and publishing systems
software, which is where we can add value for our customers. It's
clear we both bring much to this alliance and have a lot to gain from
the partnership."
Mark Elliott, vice president, General and Public Sector
Industries at IBM, said, "Marrying IBM and Atex technologies clearly
positions us to build on our international presence with newspaper and
magazine customers by delivering state-of-the-art publishing systems
and participating with the industry in the development of open
standards."
EPPS President David Monks said, "What publishers are looking for
are ways to integrate their pre-press and business operations to
better manage growth and change."
"Through this new alliance, we will offer the architecture on
which those enterprise-wide solutions can be based, and a variety of
systems to meet specific pre-press and business requirements," Monks
said.
Jonathan Seybold, a leading observer of publishing systems
technology, said, "The industry has been looking for this kind of
leadership around open systems architecture to stimulate new creative
publishing solutions. This Atex/IBM alliance should be able to
deliver the key products and solutions needed by the industry."

jmfbahciv writes:
Yes. I understand completely. It is very difficult to choose
the todo path because nobody who has their standing in society
in mind will allow you to keep doing. Rewards for doing a good
job is invariably a promotion which moves you out of the pay
level where the real work is done.

from long ago and far away, I had done an IPCS superset written in rexx
(when it was still called rex and hadn't been released as a product)
... which was initially line-mode CMS commands ... recent post
mentioning dumprx (and old email from 1982)
http://www.garlic.com/~lynn/2007.html#18 IBM sues maker of Intel-based Mainframe clones

In 1976, the vm development group in the old SBC bldg. in Burlington
Mall were told that they had to all move to POK to work on supporting
MVS/XA development and there would be no more/new VM releases.

The (old) vm development group would be responsible for a new internal
only virtual machine tool ("VMTOOL", that would never ship as a
product) which was purely dedicated to MVS/XA development. Apparently
corporate hdqtrs had been convinced that it was necessary to kill off
vm370 in order for mvs/xa to be developed.

Endicott managed to salvage some of the situation and continue with VM
product releases.

NOTE: "VMTOOL" is different from "VMTOOLS" ... "VMTOOL" was the
internal only virtual machine facility supporting MVX/XA development;
"VMTOOLS" was an network information and software distribution
facility as well as computer conferencing, available on the internal
network (implemented using TOOLSRUN)
http://www.garlic.com/~lynn/subnetwork.html#internalnet

supporting operations that included "mailing list" type operation as
well as mechanism more akin to "usenet" news.

re: UofM per; oh yes, the person who wrote the UofM per joined the
VMTOOL group about 2-3 yrs ago. He wrote new PER support for the
VMTOOL that has all the functions of OET. A major enhancement is that
the VMTOOL has what is called CP EXEC files. Since the major purpose
of the VMTOOL was going to be an MVS development vehicle ... the
delivery of computing services had to be done primarily within the CP
environment. The result was that an EXEC type processor and a new
type of spool file was created. Valid CP commands now can be one of
these CP EXEC files & as a result the type of things that can be
invoked when a PER event occurs is much more sophisticated (i.e. a PER
event can be the execution of any CP command ... which in the case of
VMTOOL may be a CPEXEC file with lots of conditional testing logic).

re: page migration; I've significantly rewritten the logic in DMKPGM
... to include among other things the use of multiple page buffers.
Biggest problems with the current implementation are 1) the release 4 AP
upgrade was incorrectly done, resulting in DMKPGM execution being serial
rather than concurrent (i.e. it should be possible to have several
invocations of PGM execution going on at the same time) and 2) only one
physical page buffer is used per invocation (i.e. I/O is done
sequentially, one drum I/O followed by one disk I/O, and then the next
drum I/O ... elapsed time to perform migration on a large system can
exceed 20 minutes).
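a rough illustration (python threads with made-up timings; nothing to do
with the actual DMKPGM code) of why multiple page buffers matter: with a
single buffer the drum read and the disk write strictly alternate, while
with two buffers the next drum read overlaps the previous disk write:

import threading, queue, time

DRUM_READ, DISK_WRITE, PAGES = 0.010, 0.025, 40   # assumed per-page times

def serial_migration():
    start = time.time()
    for _ in range(PAGES):
        time.sleep(DRUM_READ)     # read one page into the single buffer
        time.sleep(DISK_WRITE)    # write it out before the next read
    return time.time() - start

def overlapped_migration(nbuffers=2):
    bufs = queue.Queue(maxsize=nbuffers)          # in-flight page buffers
    def writer():
        for _ in range(PAGES):
            bufs.get()
            time.sleep(DISK_WRITE)
    t = threading.Thread(target=writer)
    start = time.time()
    t.start()
    for _ in range(PAGES):
        time.sleep(DRUM_READ)
        bufs.put(object())        # hand the filled buffer to the writer
    t.join()
    return time.time() - start

print("serial:     %.2fs" % serial_migration())
print("overlapped: %.2fs" % overlapped_migration())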

A similar vm370 extended PER implementation had been done in 1980 (for
internal vm370 installations) "DMKHSL" ... by the same person that had
done parasite & story.

Date: 06/09/80 14:43:07
From: somebody at WINH5
To: wheeler
You can try this DMKHSL if like living dangerously -
The source is set up to use
VMUSER1
as a pointer to an IFBLOK chain but any spare word in the VMBLOK
will do - it's only referenced in PRG and DMKHSL.
You will need to add a new entry to CFC for
"IF" and "WHEN" calling DMKHSLEN --- class G
It hooks into the existing PER mods
There is one mod to DMKPRG to call DMKHSLIH if there are active
IFBLOK's.
There is an entry point DMKHSLRL which will release the IFBLOK
Chain - but haven't got round to sorting out who should call it yet
on things like logoff force etc....

Anne & Lynn Wheeler <lynn@garlic.com> writes:
from long ago and far away, I had done an IPCS superset written in rexx
(when it was still called rex and hadn't been released as a product)
... which was initially line-mode CMS commands ... recent post
mentioning dumprx (and old email from 1982)
http://www.garlic.com/~lynn/2007.html#18 IBM sues maker of Intel-based Mainframe clones

and more old dumprx topic drift ... in the following, DUMPRX tended to
be distributed to a unique individual per (internal corporate) location
... since it was primarily used by system support personnel.

using rex(x) as implementation language for DUMPRX aided in making it
possible for other people to provide enhancements.

the following 8 (person) month estimate had at least 50 percent
contingency in it. while DUMPRX was used extensively inside
the company and more than justified any costs ... they were still
looking for package price to completely cover all costs that had ever
been associated with the effort.

re: dumprx; IBM Canada is currently putting together a "canned" VM
system that will have several features and be charged for. They have
requested that DUMPRX be included in the canned system. They have
requested total development resources for DUMPRX (for estimating
price) ... I've estimated a total, maximum effort of DUMPRX at less
than 8 months spread over the past 5 years ... including all
development, test, distribution, production support, and release to
release conversions ... that is for everybody, not just me, but
everyone that has contributed any changes, fixes, &/or enhancements to
DUMPRX. My direct distribution list for DUMPRX peaked at 130 people a
couple of years ago ... prior to its availability on VMTOOLS ... that
distribution list is now down to 108 people ... with an unknown number
of people obtaining DUMPRX from VMTOOLS.

recent post with old email from 1982 covering some of the sequence of
originally creating dumprx, also mentioning that dumprx was used for
problem determination supporting the 3090 service processor (pair of 4361
machines running customized version of vm370 release 6)
http://www.garlic.com/~lynn/2007.html#18 IBM sues maker of Intel-based Mainframe clones

re: ZORK; Barry Gold is not at MIT but at SHARE installation code RL
(i'll have to look it up sometime), sorry about the wild goose chase
around MIT. His statement on VMSHARE of 6/27/79, said he was going to
have 370 ZORK available shortly. I've contacted him via VMSHARE and he
has gotten caught up in a multitude of other VM activities. He is
working from the DEC user group's FORTRAN version but claims that he
had to get MIT approval to work on the source and would also require
MIT approval to allow other installations to work on it (while he is
busy with his other tasks). He isn't too hopeful since he has several
other communications with MIT group that have gone unanswered for a
long time and are still outstanding.

Interchange Fees: The tipping point
http://www.cyberwarzone.com/united-states-leaking-1tb-data-daily-foreign-countries

from above
Fed up with out-of-control interchange fees, retailers are fighting back
with concerted legal and educational tactics -- and, in some cases,
proactive offensives of their own.

... snip ...

http://www.epaynews.com/newsletter/epaynews322.html

from above:
Convenience store operators can make more money on a 12-ounce cup of
coffee than they can on a 12-gallon tank of gas. Credit card fees now
account for almost half of a typical store's expenses - more than labor.

Faster payments should not result in weaker authentication
http://www.securitypark.co.uk/article.asp?articleid=26294&CategoryID=1

from above:
The 11 faster payments member banks are progressing rapidly with their
implementation projects ahead of the November 2007 deadline. However,
as the systems being developed will enable a payment to be processed
in less than 15 seconds, there is no time to stop a payment, and
adequate authentication of the transactions becomes critical.

... snip ...

when we did the payment gateway as part of this stuff that came to be
called e-commerce ... we had some stats on how fast a transaction
turned around at the payment gateway; thru the payment network and
back (that was separate from any transit delays thru the Internet
between the webservers and the payment gateway) ... the avg. ran
between 200-300 milliseconds.

one of the early aads chip strawman objectives (in the 90s) was to be
able to meet the 200 millisecond transit requirement in a contactless
form factor and contactless power profile
http://www.garlic.com/~lynn/x959.html#aads

John.Mckown@ibm-main.lst (McKown, John) writes:
Our SAN boxes do this on Fibre Channel as well. Hum, I am not sure
about the multipathing. From some discussion on the z/Linux forum
about FCP (Fibre Channel) I think it is supported. In any case, the
"open" DASD are still cheaper per megabyte that the exact same boxes
which are Ficon (with ECKD emulation).

Somebody you apparently know called me yesterday and said he was
looking for background information on hippi, fcs, etc.

I believe he left the conversation with some misunderstandings. I
mentioned disk arrays, ipi3 over hippi, fcs, etc. I explained disk
arrays to him and parity drives in an 8-for-9 configuration
representing no single point of failure. He disagreed. He said that he
had studied the subject and the best that correcting codes could do was
on the order of 3 correcting bits for 8 bits of data.

His misunderstanding is at several levels. First, the parity drive
isn't to detect errors. Each drive has significant embedded error
detection/correction capability. The parity drive handles the
situation where the drive is reported as bad. All the standard,
run-of-the-mill disk technology is used to indicate whether data
coming from a drive is good or bad (or even if you are getting any
data at all from the drive). It is only once that standard disk
technology identifies which block of data is bad ... then RAID
technology takes over and is able to use the parity drive to recreate
the original record (including the missing piece from the bad drive).
An 8-for-9 RAID configuration is able to withstand a single point of
failure and still reconstruct the data. In the worst case, a whole
drive fails (and standard disk technology identifies that the drive
has failed). Information from the parity drive and the remaining 7
data drives are able to reconstruct the missing piece of data.
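a minimal sketch of that reconstruction (python with toy block sizes): the
parity drive holds the XOR of the data drives' blocks, so once the
drive-level error detection identifies which block is missing, XORing the
surviving blocks with the parity block regenerates it:

import functools, random, secrets

def xor_blocks(blocks):
    return bytes(functools.reduce(lambda a, b: a ^ b, col) for col in zip(*blocks))

data = [secrets.token_bytes(16) for _ in range(8)]   # blocks from 8 data drives
parity = xor_blocks(data)                            # block on the parity drive

lost = random.randrange(8)                           # subsystem reports this drive bad
survivors = [blk for i, blk in enumerate(data) if i != lost]
rebuilt = xor_blocks(survivors + [parity])

print("drive %d reconstructed correctly: %s" % (lost, rebuilt == data[lost]))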

The same principle works even in a 16-for-17 configuration. The single
parity drive is able to reconstruct data from a single missing drive.
However, as the number of drives increase in a RAID configuration, the
probability increases that there would be multiple simultaneous
failures. In the Thinking Machines "Data Vault", where they make use
of 32 parallel drives (32 drives spinning and having the appearance of
a single drive that has 32 times the capacity and 32 times the
transfer rate), there are actually 40 drives in the configuration.
Five drives are used as "ECC" drives, and three drives are "spare",
available for electronically swapping/switching into the
configuration (in the place of a defective drive). As part of a
spare-drive "swap-in", the data on the defective disk is recreated (in
the background) and written to the new substitute drive.

The other comment (after further discussion) was that drive failure
is so improbable that it isn't worth doing. It is worth doing in a
large number of cases. Assume a decent database configuration (we had
one such customer in today) that has 1.5 terabytes of data spinning in
a single machine room. In a RAID configuration, you can take
off-the-shelf Imprimus 1.5gbyte, 5.25in drives that spin at 5400 rpm
and have well under 12 millisecond avg. access. 4 of these drives can
go in a drawer and you can have seven drawers in a rack. This provides
for 28 drives, three "8-for-9" logical drives, with a single spare
drive. Effectively there are 36gbytes of data in a single rack.
Approximately 45 such racks (1000 data drives + 250 parity drives =
1250 drives) provide the 1.5 terabytes of data. The base
quantity price for the Imprimus drive is around $4k, resulting in a
little over $5m for the Imprimus disk costs. Assuming a nominal,
conservative 80k hours for drive MTBF & a uniform failure
distribution, one would expect a drive failure (somewhere in the
complex) on the avg. of once every 60 hours. It actually isn't a
uniform failure distribution, so the observed failure rate would be
somewhat better ... but still quite frequent.
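
the failure-interval arithmetic, written out (the 80k-hour MTBF and
the uniform failure distribution are the stated assumptions, not
measurements):

# expected interval between drive failures somewhere in the complex
drives = 1250          # 1000 data drives + parity/spare drives, per the text
mtbf_hours = 80_000    # nominal, conservative per-drive MTBF
print(mtbf_hours / drives)   # = 64 ... i.e. a failure roughly every 60 hours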

Finally, I'm not sure that he has been keeping up with things like
various forward-error-correcting codes, like Reed-Solomon. For the
High-Speed Data Transport project (there are a number of papers on the
project that I wrote), we were dealing with Cyclotomics which is a
leading outside vendor in the area. 15/16ths Reed-Solomon encoding
(i.e. 15 data bits of 16 total bits) can provide six orders of
magnitude improvement in the bit error rate. In some of the media that
we were looking at with BER rates in the 10**-9 range, 15/16ths
Reed-Solomon encoding provides the appearance of a 10**-15 BER. The
whole compact disk market, to take a simple example, is using digital
FEC encoding.
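
the sort of improvement claimed above can be sanity-checked with the
standard residual-error formula for a code that corrects up to t
symbol errors per n-symbol block; the n and t below are illustrative
assumptions, not the actual Cyclotomics code parameters:

from math import comb

def residual_error(n, t, p):
    # probability that more than t of the n symbols in a block are in
    # error (i.e. the block is uncorrectable), given raw symbol error
    # probability p
    return sum(comb(n, k) * p**k * (1 - p)**(n - k)
               for k in range(t + 1, n + 1))

# assumed parameters: 16-symbol blocks, single correctable symbol error
print(residual_error(16, 1, 1e-9))
# ~1.2e-16 ... a raw 1e-9 rate drops to roughly 1e-16 per block,
# comparable to the six-plus orders of magnitude improvement described above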

in any case, you might like to talk to him ... and bring him a little
more up-to-date on the technology

glenn.miller writes:
My comment about the STK Log Structured Array ( LSA ) mechanism was
a comparison to all the other mainframe DASD vendors ( IBM, HDS &
EMC ). The STK LSA is a method of storing the data internally
within the DASD subsystem. As far as I know, no other mainframe
DASD vendor used the LSA concept to internally store the data (the
IBM RVA doesn't count; it was a 'newer' STK ICEBERG with an IBM
label on the cover of the machine).

from above:
The source volumes within a storage group can span logical subsystems,
physical subsystems, or both.

With ESS FlashCopy Version 2, the source volumes can be in different
Logical Subsystems within the same Storage Subsystem.

Because the ESS is not a Log Structured Array (LSA) device, but rather
uses Home Area method, the use of multiple backup versions means that
physical space will be consumed by this approach. On the RVA, in
contrast, its LSA technology means that the backup versions only take
a small fraction of extra physical space.

... snip ...

for drift ... some old email with old mention of log structured
filesystem studies.

They've requested a meeting here in lsg the afternoon of the 14th.
There was some question whether we would see some of the nasa/ames
folks at ieee mss
in monterey ... I'm not sure how much time we will be at the ieee
meeting since we have work to do as well as both UK social security
department and Sanwa back coming in next week.

Also the nasa/ames meeting the following week interferes with
attendance at sosp '91 in asilomar on the 13th-16th. It is too bad
that it isn't the 15th ... I would drive back up and stay for the MIPS
presentation. This means I'll effectively be commuting back&forth
between here and monterey some good part of next week and the week
after.

..... reference .....

Newsgroups: ibm.ibmunix.unixnews
Date: 4 Oct 91 17:30:26 GMT

... this is being cross posted with netwrkng.forum on chaste and
unixnews.forum on ibmunix
xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx

Note that AIX/EXPO was this week at the Santa Clara Convention Center.
We also held the HA/6000 class for IBM/US at the Sunnyvale AIX Porting
this week.

Because of conflicts over the HA/6000 presentation at AIX/EXPO and the
class, I've missed the ANSI X3S3.3 HSP and XTP/TAB meetings in Boulder
this week. During a disclosure that I gave to Nasa/Ames this week at
AIX/EXPO, Eugene was there. He asked me if I could bring some
interesting toys (for a conference) ... he also mentioned that he has
gotten a full-size skyboard VR system (board, feedback, wrap-around
image, etc).

There were a couple of vendor products that were of interest at
AIX/EXPO ... one especially for HA/6000 was a SCSI RAID3/RAID5
(configurable) product with dual-power supplies and two SCSI bus
interfaces (i.e. could be connected to two different SCSI buses ...
going to two different processors ... and as such doesn't get into the
power on/off glitches). Controller supports up to seven 4-for-5 banks
(i.e. seven logical RAID5 drives, ... 35 drives total). Claimed
performance for a 5 drive RAID5 configuration with 1024 byte sector
size and 4:1 read-to-write ratio is 425 I/Os per second.
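
the capacity arithmetic behind the "seven 4-for-5 banks"
configuration, written out:

# quick arithmetic on the RAID5 box described above
banks = 7
drives_per_bank = 5
usable_per_bank = 4          # one drive's worth of capacity goes to parity
total_drives = banks * drives_per_bank            # 35 drives total
usable_fraction = usable_per_bank / drives_per_bank
print(total_drives, usable_fraction)              # 35 drives, 80% usable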

Also note, next week Interop '91 is at the Santa Clara Convention
Center. Also, next week, the RISC/6000 shrink-wrap Unitree will be on
display at the IEEE MSS meeting in Monterey. As an aside, the XTP/TAB
members will be out in force during the 14th-17th in Minneapolis.

I've managed to be busy/miss every SOSP since we had that last
business meeting at SOSP/Asilomar and decided that it would be fairer
to the east coast students if the meetings were rotated between coasts.

Looks like one could spend almost the whole fall going to interesting
activities. Unfortunately, we leave for Hong Kong on the 24th for the
HA/6000 class for APG ... and then on to Sydney the following week for
a week HA/6000 class for IBM Australia. I would like to hear from
anybody planning on attending Allison's pitch on the 24th.

John Mashey is vice president, systems technology at MIPS Computer
Systems, Inc., and he is responsible for long-range product planning
and technical direction for his company's marketing and sales. He is
also the featured speaker for the October 15 meeting of the Santa
Clara Valley Computer Society. He will talk about the MIPS R4000.

The presentation will look at recent progress in microprocessors,
focusing especially on the MIPS R4000. The R4000 is a superpipelined
1.3-million transistor chip that includes on-chip caches, floating
point, 64-bit integer unit, and control for two-level writeback caches
with support for many cache coherency protocols.

Mashey will describe the chip and especially the rationale for some of
the design choices. He will also touch on the ACE initiative and the
R4000's relationship to it. Samples of the R4000 will be available for
viewing.

Prior to joining MIPS in 1986, John Mashey was director of software
engineering for the data systems division of Convergent Technologies.
Before that he spent 10 years with Bell Labs where he was a key
contributor to the design and development of the Programmers Workbench
version of UNIX. Mashey has been an ACM National Lecturer and is a
leading speaker on UNIX and RISC. He earned his PhD in computer
science from Penn State.

XXXXXX and I wish to talk with you about Medusa. We are working
on massively parallel processing, and what IBM approach may be
suitable for DARPA. I was unable to get a message left with your
phone mail, so this note will alert you to our interest. We will
attempt to call you at 8:30 your time.

For software I think we need to take a phased approach ranging from
1) The first phase would then be the minimum that an Alpha test
customer would require to be successful (someone like LLNL or
Sandia)
N) The last phase would be the full-blown 'everything you ever wanted'
in Medusa software support

I believe this would let us concentrate on the first phase.

We also need to develop a software test plan for phase 1

I believe it would be VERY good if we could include this as part of
the 10/10 review.

and posting with old email from just prior to being told that the
effort was being transferred and we weren't supposed to work on
anything with more than four processors:
http://www.garlic.com/~lynn/2006x.html#3 Why so little parallelism?

I (basically) have an internet shadow via ftp/anonymous on "wheeler"
.... previously wheeler.austin and now wheeler.lsg. It includes most
of the gnu stuff, psk bit maps, most of the project athena stuff
... along with lots of other things from around the internet.

For instance, Austin got both Kerberos and gnumake from "wheeler".
Also, apparently the aix/370 development group has just recently
discovered Kerberos ... and because of the way (at least some of the)
aixnet routers are set-up ... austin subnets can access wheeler.lsg,
but the aix/370 group's subnets (ip-addresses) can't get at
wheeler.lsg (our security is so good, that even ip subnets that we
would like to have access, can't get thru).

I finally had to tar/compress all the kerberos stuff and binary/upload
to VM and then send it off to aix/370 group.

Just another example of mainframe costs

Bob.Richards@SUNTRUST.COM (Richards.Bob) writes:
That is exactly WHY the z9 can be cost effective in the right
circumstances using Linux under z/VM. Heck, even z/OS competes well
today against WebSphere/UDB under AIX with the advent of zAAPs and
zIIPs.

... various news items (this week) on the subject have somewhat been
all over the place

SSL info

Ertugrul Soeylemez <never@drwxr-xr-x.org> writes:
Besides the fact that a certificate contains a bit more information,
what are the privacy implications? Unless the certificate represents
something like an electronic form of your passport, you decide what
goes in there. When the CA decides to sign it, then they do so.
Otherwise you're free to go elsewhere.

Now to the security part: A public key, as its name states, is made
publicly available. If that does reduce security, then what's the point
in public key cryptography? The authenticator really just needs the
public key to verify authenticity. A certificate is nothing more than
an encapsulated public key, together with some information about its
holder, and one or more signatures (at least from a CA in the proper
case).

so you have a client that generates a public/private key pair. the
client registers the public key with the server/certification
authority ... the server/ca registers the public key in the server/ca
database ... then the server/ca generates a digital certificate
containing the public key and gives a copy of the digital certificate
to the client.

now in an authentication operation, the client digitally signs
something, appends the digital certificate and transmits the digital
signature and digital certificate to the server/ca ... who already has
a copy of the client's public key on-file.

since the server/ca already has a copy of the client's public
key (as part of the registration operation) ... and, in fact,
the server/ca probably even recorded the original of the client's
digital certificate. that means the server/ca has not only the
client's public key but also the client's digital certificate.

requiring the client to return a copy of the digital certificate to
the ca/server on each digital signature operation is redundant and
superfluous ... when the ca/server already has a copy of the client's
public key and typically also has the client's original digital
certificate (after having sent a copy of the client's digital
certificate to the client).

the ca/server would also run much more efficiently if they just used
the client's on-file public key that they already have to verify the
client's digital signature ... rather than having to go thru the
repeated extraneous gorp of verifying the (appended transmitted)
client digital certificate along with all the related digital
certificate encoding/decoding magic.

as mentioned before ... one of the reasons for the retrenching from
the early 90s x.509 identity digital certificates to the
relying-party-only digital certificates in the mid-90s ... was
eliminating all the extraneous personal information. It isn't so much
the publication of the public key that was the issue ... it was spraying
personal information all over the place every time the digital
certificate was transmitted.

however, it is straight-forward to demonstrate that it is much more
efficient and drastically simpler for the relying party to directly
retrieve the public key from an online record, eliminating all of the
client digital certificate gorp ... i.e. certificate-less public key
http://www.garlic.com/~lynn/subpubkey.html#certless
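
a minimal sketch of the certificate-less flow (using the python
"cryptography" package; the ed25519 key type and all names here are
illustrative assumptions, not anybody's actual implementation) ... the
relying party keeps the public key on-file at registration and then
verifies later digital signatures directly against it ... no digital
certificate is transmitted or checked per transaction:

from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey
from cryptography.exceptions import InvalidSignature

# client side: generate a key pair, register only the public key
client_private = Ed25519PrivateKey.generate()
on_file = {"client-42": client_private.public_key()}   # relying party's database

# later: the client digitally signs a transaction and sends account id,
# transaction, and digital signature ... no digital certificate appended
transaction = b"transfer $10 to merchant X"
signature = client_private.sign(transaction)

# relying party: verify directly against the on-file public key
try:
    on_file["client-42"].verify(signature, transaction)
    print("digital signature verifies against the registered public key")
except InvalidSignature:
    print("authentication failed")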

2) typically password scenarios require a unique value for every
different security domain. the problem is that the same value is
used for both origination and verification. unique passwords
are a countermeasure for scenarios where one domain can attack
another (i.e. local garage isp and your online banking or
your place of business).

there is a huge advantage in using public keys (and digital signatures)
for authentication ... compared to pins/passwords. this is true
regardless of whether it is a certificate or certificate-less paradigm.

however, sometimes there is a misconception that public keys
and digital certificates are equivalent.

digital certificates are a mechanism for trusted information
distribution for the offline environment ... somewhat the electronic
version of letters of introduction/credit from the sailing ship days
... when two entities were strangers and had no prior relations
.... and the relying party(s) had no way of directly contacting any
certifying party.

However when it becomes abundantly evident that the offline paradigm
digital certificates are redundant and superfluous in an online world
... and/or between entities that have an established relationship ...
then it doesn't have to follow that there is no advantage to having
public key infrastructure (without digital certificates).

while hippi may have been more expensive than scsi ... for scale-up
with various kinds of raid .... really large aggregation of
(inexpensive) raid disks could be handled by hippi ... and the disk
prices (price/mbyte) would dominate.

part of the issue with some of the more complex file management
infrastructure is also the scaleup when you have hundreds and/or
possibly thousands of drives. keeping track of what was actually
needed on disk. unitree and some of the others ... referred to here
http://www.garlic.com/~lynn/2006v.html#10 What's a mainframe?

sort of started at the high-end having to deal with such management
issues.

I had started with a somewhat similar position when i originally
implemented CMSBACK in the late 70s ... starting out as a purely
backup/archive operation ... not integrated into filesystem workings.
However, CMSBACK went thru quite a bit of evolution ... workstation
datasave facility, ADSM, and now the current TSM (tivoli storage
management)
http://www.garlic.com/~lynn/2006v.html#24 Z/Os Storage Mgmt products

for other drift, one of the things i (vaguely?) remember being told
about the difference between 360/195 and 370/195 ... was some amount
of instruction retry had been added to 370/195. the claim was that
there were so many parts in the 195 ... there was the possibility of a
couple of hardware events a week that would be handled by instruction
retry.

this was when the 370/195 guys were talking to us about doing software
support for a dual-istream 195. the 195 pipeline supported something
like 10mips when there was branch looping within the pipeline ... but
otherwise branches would stall the machine ... and normal codes ran
closer to 5mips. a dual-istream (somewhat like the current genre of
hardware multi-threading) had a chance of keeping the pipeline full
(since there would be two streams nominally running at 5mips). this
never shipped to customers.

and i had to address a similar problem when i rewrote the i/o
supervisor for the disk engineering and product test labs
to make it absolutely bullet-proof ... recent reference
http://www.garlic.com/~lynn/2007.html#2 "The Elements of Programming Style"

jmfbahciv writes:
<shrug> If you got 1000 parts that can break, your software
has to be a lot more paranoid about a lot more specific things.

there have been past threads about why aix/370 (and some of the other
370-oriented unixes) and (amdahl's) UTS weren't normally run on the
bare iron as opposed to normal operation under VM.

one of the big issues was that there was significant RAS code in VM
... and the cost of adding similar RAS code to any unix port was
significantly larger than the total cost of just doing a
straightforward port of unix to 370.

Field Service even took the position that they would not service a
machine that didn't have the necessary RAS and EREP software (i.e. VM
RAS and EREP would handle a lot of the error diagnostic, recovery, and
recording ... transparent to the virtual machine operation).

the exception that proves the point was the tss/370 ssup that saw
extensive deployment inside at&t. higher level parts of Unix were
mated to the low-level tss/370 kernel interfaces ... unix was sort of
running on a "370 bare machine" ... but it was actually layered on top
of the lower level tss/370 kernel (which provided all the 370 RAS and
EREP support).

the 3090 had been designed so that there was a certain kind of error
that was only supposed to have 3-5 aggregate occurrences for the year
(aggregate total across all customer machines installed for a year,
not just aggregate per customer). the problem was that it was
published that there had been closer to 20 such aggregate errors. I
asked the vendors in the audience how many had facilities where they
knew the number of all errors for all installed machines, and if they
were even accumulating such information ... how many vendors had the
information readily publicly available?

rfochtman@YNC.NET (Rick Fochtman) writes:
No argument here. But the hardware evolution in terms of RAS has made
the MF KING in this area. MF had a poor record to start, but it's
improved immeasurably over the last 40+ years, with the evolution of
microcode and things like automatic recovery and fail-over to
unaffected components, sparing, etc. The "squatty boxen" will
eventually arrive at the same point in their evolution, but they
haven't got there yet. In my own experience, the hard drives in use
today have a long way to go. They've come a long way, compared to
their abysmal past, but there's still a lot of room for
improvement. And I have a MAJOR problem with a software vendor who
tells me to "Reboot and see if that fixes it" or "Re-install and see
if that fixes it". Diagnostic tools and procedures are a world apart
between z/OS and the various Intel-based systems.

besides many of the 370 UNIX ports running under VM in virtual
machines ... in large part because of the significant costs to add
RAS/EREP support to Unix (significantly larger than the cost of doing a
unix port to 370) ... there was also periodic concern regarding just
maintaining RAS/EREP support in the standard 370 operating systems
... one example
http://www.garlic.com/~lynn/2007.html#2 "The Elements of Programming Style"
that includes this old email reference
http://www.garlic.com/~lynn/2007.html#email801015

at one point there was a study of the "four" standard 370 operating
systems and the corporation was going to cut back to fewer ... just
based on the cost of maintaining the RAS/EREP software in all the
different operating systems.

another is a story about the 3090 service processor. Field Service had a
defined process for being able to diagnose and service customer
machines in the field ... that started with being able to "scope"
components to diagnose failed elements.

the 3081 had lots of components which were no longer directly
scopeable ... which gave rise to the "service processor". The service
processor had extensive probes into the 3081 components for diagnosing
errors and failed components. While the 3081 wasn't directly
scopeable, there was a bootstrap "scope" diagnostic process that
started with scoping the service processor ... and then using the
service processor to diagnose the 3081.

The 3090 service processor started out as a 4331 ... which was
scopeable. Before the 3090 shipped, the service processor strategy was
upgraded to a pair of (vm370-based) 4361s. Instead of having a
bootstrapable scoping field service process ... the 3090 went to a
pair of redundant 4361s.

part of the mainframe availability strategy was based on loosely-coupled
configurations. long ago and far away ... my wife had been con'ed into
going to POK to be responsible for loosely-coupled architecture. While
she was there she developed peer-coupled shared data architecture
http://www.garlic.com/~lynn/submain.html#shareddata

however, except for IMS hot-standby, there wasn't a lot of uptake
until sysplex (somewhat as a result, she didn't stay long in the job).

How many 36-bit Unix ports in the old days?

CBFalconer <cbfalconer@yahoo.com> writes:
In the old days we designed the fundamental gates (with
transistors, diodes, resistors and power supplies) to work under
worst case component tolerances, and the fan-in/fan-out was
specified accordingly.

somewhat related, being able to scope & diagnose individual components
... and the transition to no longer being able to scope individual
components ... recent cross-over post from bit.listserv.ibm-main
http://www.garlic.com/~lynn/2007.html#39 Just another example of mainframe costs

recent post with short discussion about why X.509 identity
certificates (from the early 90s) morphed into relying-party-only
certificates by the mid-90s (and possibly why something similar could
happen to passports)
http://www.garlic.com/~lynn/2007.html#17 SSL info

and this discussion about, rather than attempting to absolutely
eliminate the possibility of all data breaches and security breaches
(even those involving insiders), instead eliminating the attackers'
ability to use (the majority of) the information for fraudulent purposes
http://www.garlic.com/~lynn/2007.html#5 Securing financial transactions a high priority for 2007
and somewhat related post here
http://www.garlic.com/~lynn/aadsm26.htm#18 SSL (https, really) acclerators for Linux/Apache?

SSH protocol analyzer

Fred C Dobbs <fredcdobbs@nymialias.net> writes:
The original TCP/IP reference model consisted of four layers,
but has evolved into a five layer model.

The OSI model describes a fixed, seven layer stack for networking
protocols. Comparisons between the OSI model and TCP/IP can give
further insight into the significance of the components of the IP
suite, but can also cause confusion, since the definitions of the
layers are slightly different.

2) HSP went directly from transport to LAN MAC interface ... bypassing
the transport/network interface ... which also violated the OSI model

3) HSP supported internetworking. internetworking (i.e. IP in TCP/IP)
is a non-existent OSI layer that sort of sits between the bottom of
transport (layer 4 in OSI) and the top of network (layer 3 in
OSI). Since IP doesn't exist in OSI, a protocol that supports
IP/internetworking also violates OSI.

as always in the RFC summaries, clicking on the ".txt=nnnn" field
retrieves the actual RFC.

from above:
It was clear from the start of this research on other networks that
the base host-to-host protocol used in the ARPANET was inadequate for
use in these networks. In 1973 work was initiated on a host-to-host
protocol for use across all these networks. The result of this long
effort is the Internet Protocol (IP) and the Transmission Control
Protocol (TCP).

... snip ...

now, one of the issues/inadequacies was not having an internetworking
layer that allowed lots of different networks to interoperate.

now, one of the interesting things that happened in the late 80s was
the US federal gov. mandate (similar positions could be found by
numerous other govs) to eliminate the internet and replace it with the
OSI model and ISO standard protocols ... even tho all the work leading
up to internetworking in the 70s and early 80s showed that the OSI
model was not adequate for large heterogeneous operations (i.e. not so
much the technical interoperability ... but that totally different
organizations and business operations could interoperate). The OSI
model was much more of a single-organization, flat (networking)
service model, typical of the telephone companies and PTTs of the 60s
and 70s.

Anne & Lynn Wheeler <lynn@garlic.com> writes:
I have an old t-shirt that some people at Amdahl were distributing
when vm/sp initially came out; it has a particularly gory looking
vulture labeled "vm/sp" and a line underneath that says "vm/sp is
waiting for you".

there was another 370 operating system that also needed extensive
RAS/EREP support, which was TPF ... recent post mentioning mainframe
RAS/EREP topic
http://www.garlic.com/~lynn/2007.html#39 Just another example of mainframe costs

TPF had been called ACP (airline control program) but appeared to get
renamed to transaction processing facility when they found large
financial networks using it for high-end transaction processing.

the original vm370 multiprocessor support from the 70s was somewhat
based on the assumption that there was enuf work to keep the
processors busy. lots of past posts mentioning multiprocessor support
and/or compare&swap instruction
http://www.garlic.com/~lynn/subtopic.html#smp

however, there were special changes in vm/sp1 that significantly
increased the multiprocessor overhead for nearly all customers in
order to pick up some benefit for a small set of customers interested
in running TPF (as a single processor guest, since TPF didn't have
multiprocessing support) on 3081s (with little or no other
workload). The 3081 was originally introduced with no plans for a single
processor machine (later they did introduce the single processor 3083
... in large part based on TPF customer requirements).

the vm370 issue was that normally, (TPF) virtual machine execution was
serialized with vm370 kernel emulation of various privileged
instructions. the big vm/sp1 change for the single guest 3081/TPF case
was somewhat around virtual machine handling for the SIOF (start i/o
fast) instruction.

The SIOF instruction architecture (introduced with 370) allowed for some
amount of overlapped processing with subsequent processor instruction
execution. As a result, vm370 could theoretically (also) carry out
SIOF instruction emulation overlapped with virtual machine execution
(if there were at least two processors available). In order to enable
this scenario (in large part because in the single guest TPF/3081
scenario the 2nd processor was always idle), a lot of signal processor
instructions were introduced into the vm370 kernel (for interprocessor
kernel signaling) causing a lot of kernel signal processor interrupts
and a whole lot of (additional) multiprocessor lock operations. This
was done for the general case, on the off chance it might provide for
concurrent two-processor utilization in the single guest TPF/3081 SIOF
scenario. However, it significantly drove up general kernel overhead
for processing all the SIGP interrupts and managing all the additional
kernel locking operations (for all cases).
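
a toy analogy (python threads, definitely not vm370 code) for the
trade-off: handing each small emulation off to a second worker incurs
signaling and queue/lock traffic on every operation, which can easily
exceed the cost of just doing the work inline unless the other
processor would otherwise sit idle and the overlap is worth it:

import queue
import threading
import time

def emulate_siof():
    # stand-in for the (cheap) per-instruction emulation work
    sum(range(200))

def run_inline(n):
    start = time.monotonic()
    for _ in range(n):
        emulate_siof()
    return time.monotonic() - start

def run_handed_off(n):
    work = queue.Queue()
    def worker():
        while True:
            item = work.get()        # analogue of fielding a SIGP "interrupt"
            if item is None:
                break
            emulate_siof()
            work.task_done()
    t = threading.Thread(target=worker)
    t.start()
    start = time.monotonic()
    for _ in range(n):
        work.put(True)               # analogue of the per-operation signaling
    work.join()
    elapsed = time.monotonic() - start
    work.put(None)
    t.join()
    return elapsed

# note: with CPython's GIL the handed-off version can't actually overlap
# the cpu work either, which only makes the per-op signaling cost more visible
n = 10_000
print("inline:    ", run_inline(n))
print("handed off:", run_handed_off(n))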

In the following, the "I/O reliability" clean-up/rewrite ... made
nearly all of the I/O code smaller, faster, cleaner and more
straightforward ... not just the alternate path support.

re: alternate path; as part of the I/O reliability clean-up ... circa
80 ... I completely rewrote the alternate path code (for both UP and
MP) ... it is smaller, faster, cleaner and more straightforward to
understand than the existing alternate path support. It also maintains
more information about path use activity and gives a clearer picture
about what is going on in the system.

Changes in SP1 aggravated a lot of lock spin, cpu-to-cpu SIGP chatter,
and VMBLOK dispatch allocation by processor. Unfortunately most of the
activity to address some of the problems introduced by SP1 (&/or
generic MP problems) has been confined to HPO (on the other hand HPO
has been doing a lot of things like attempting to tune free storage
allocation to 3081 cross cache characteristics).

There appears to have been little or no base-SP activity in the MP
performance area.

The SP vis-a-vis HPO multiprocessor issue was that HPO was an added
price option which was somewhat justified for the high-end "POK"
machines. The mid-range "Endicott" machines were starting to ship more
and more multiprocessors ... but the base multiprocessor support had
significant extra overhead introduced with VM/SP1 ... which was only
getting fixed in the extra cost HPO kernel addon.

The comment about system locks that I was making was comparing VM/SP
with VM REL 6; there are vastly more locks obtained in SP than there
are in REL 6. We've been running CP REL6 for 9 months or so. I don't
understand why they decided to put lots more lock requests in SP

re: lock message; recent observation by an internal location undergoing
conversion from release 6 to VM/SP is that VM/SP is taking at least
6-8% additional CPU overhead because of the new locking features of
VM/SP.

However, VM/SP issues weren't simply limited to changes for
multiprocessor operation ... but also degraded single processor
operations ... as implied by the old t-shirt that was
being distributed by people at Amdahl
http://www.garlic.com/~lynn/2007.html#11 vm/sp1

re: sp/MP performance; observed SP/MP performance degradation compared
to release 6 is consistent with measurements at a large number of
installations (almost everyone I have talked to), customer & internal,
UP, AP, & MP. Even with the subpool fix plus several enhancements
to SP (which would have also improved release 6 performance), SP still
runs slower than release 6. Every severity one performance situation
that I've been contacted about by people in the field since SP came out has
been a release 6 to VM/SP conversion problem (i.e. problem appeared at
the time the installation converted to SP). Also, while UP performance
is degraded, AP/MP performance is more severely degraded. Implication
is that there may be multiple problems causing degraded SP performance
(as compared to release 6) ... some of the problem(s) are general (UP,
AP, & MP), others are uniquely multiprocessor problems.

This most recent data point is completely consistent with all others
that I'm aware of.

edgould1948@COMCAST.NET (Ed Gould) writes:
Never heard of it, so I guess. That doesn't mean it never existed,
just that it had an extremely small audience. I don't recall ever
hearing anything about VF. I would expect if it were popular that
there would be a current model, no?

I am guessing that VF stands for vector facility (?). I don't recall
seeing PTF's for it. So either it was an extremely small install base
or the IBM code didn't have bugs?

I used to scan PTF cover letters and don't recall any mention of
it. My memory is far from perfect, but chances are I would have read
(or heard from a GUIDE/SHARE presentation) of it.

supposedly one of the benefits of vectors is to increase the rate at
which the floating point unit(s) is fed, to keep it constantly busy ... not
stalling waiting for memory fetches or other delays.

claims have been that various kinds of optimization in instruction
fetching and other optimization can improve scalar floating point
... such that the floating point unit can be kept busy w/o requiring
vectors.
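
a loose present-day analogy (python/numpy, nothing to do with 3090
vector hardware) of the point about feeding the floating point unit(s)
in bulk versus one element at a time:

import time
import numpy as np

a = np.random.rand(1_000_000)
b = np.random.rand(1_000_000)

start = time.monotonic()
scalar = [a[i] * b[i] + 1.0 for i in range(len(a))]   # one element at a time
scalar_time = time.monotonic() - start

start = time.monotonic()
vector = a * b + 1.0                                  # whole arrays at once
vector_time = time.monotonic() - start

print(f"scalar loop: {scalar_time:.3f}s   vectorized: {vector_time:.3f}s")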

bugs would have tended to be related to program loading/use of the
vector registers, mostly fortran ... although there may have been a
performance optimization in virtual machine support about whether a
virtual machine was enabled for the vector facility or not ... and whether
vector registers had to be saved and/or restored.

Regarding vector: For the past two months we have been working on (and
have now completed) the Objectives for VM/XA SP Release 1. (Terminology
update: VM/XA Migration Aid releases 3 and 4 have become VM/XA Systems
Facility releases 1 and 2; the next thing after that is VM/XA SP
Release 1.)

VM/XA SP Release 1 is currently planning to support vector. XXXXXX
owns the design of the vector line item and I own the design of the
dispatcher. Dispatcher will be changed to:

(1) Have some flavor of a true runlist to reduce overhead.
(Though part of the overhead that yyyyyy sees here will
tend to disappear naturally when other problems in the system
are fixed... such as avoiding putting excessive users in
the dispatch list.)
(2) Have a "soft affinity" capability. Users should tend to get
dispatched on the same processor on successive dispatches to
reduce cache interference.
(3) Have a "hard affinity" capability. This is so a user can be
assigned to always run on the same processor or subset of
processors. For example, when the user needs vector and not
all processors in the system have it.

The question quickly comes up, how will vector be supported? Since it
could be costly to dispatch someone using vector, do you allow
everyone to use it, or just authorized users? And do you allow just
one vector user into the dispatch list at a time, or do you allow
everyone in? We are currently leaning strongly toward letting anyone
use it (no authorization), and letting everyone into the dispatch list
at once. There are two reasons why it seems we will be able to get
away with that: (1) With the existing VM/XA design it seems that we
will be able to bill the extra dispatching overhead to the guilty
user... that is, it will be stolen from his problem state time, and
will not be felt by other users. (2) I am told by XXXXXX that the
hardware keeps reference (and/or) change status on the various vector
registers, therefore when a vector user is dispatched repeatedly (as
when he is swapping in his working set) the vector overhead won't be
so bad because we only have to save/restore the part that was used.
At least that is the theory... I'm not knowledgeable about vector, I
get all my information from XXXXXX. A while back though XXXXXXX said
the hardware may initially not have all the reference/change stuff
implemented... so maybe our initial support will be more limited than
I've indicated.
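
for illustration, a minimal sketch (python; the names are assumptions,
not the VM/XA code) of the "soft affinity" idea from item (2) of the
dispatcher objectives quoted above: prefer the processor the user last
ran on, to preserve cache contents, but fall back to any idle
processor rather than leave one unused:

def pick_processor(user, idle_processors, last_ran_on):
    # return the processor to dispatch `user` on, or None if none are idle
    if not idle_processors:
        return None
    preferred = last_ran_on.get(user)
    if preferred in idle_processors:    # soft affinity: same CPU as last time
        return preferred
    return next(iter(idle_processors))  # otherwise take any idle CPU

# toy usage
last_ran_on = {"userA": 1}
print(pick_processor("userA", {0, 1}, last_ran_on))   # -> 1 (affinity honored)
print(pick_processor("userA", {0}, last_ran_on))      # -> 0 (fall back to idle CPU)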

Brian Inglis <Brian.Inglis@SystematicSW.Invalid> writes:
Under VM/CP SET AFFINITY ON 1 only allowed the virtual machine to
execute on CPU 1, so it got harder to schedule that virtual machine to
the deadline as the system got more loaded, although it could reduce TLB
misses and cache invalidations if the system had any idle time.

Looking through the lead article in the 3/14 issue of ComputerWorld,
there is a clarification of the recent DEC announcement of the 4-way
MP and DEC's support of Symmetric Multiprocessing.

Basically the message is that the operating system shipped with the
new VAX machines DOES support symmetric multiprocessing. Apparently,
the operating system is a pre-release version of VMS 5.0 or perhaps an
older version with the VMS 5.0 symmetric processing support added.

Since their machines have been APs in the past, we have not included
the multiprocessor versions of their machines as competitors in a
Commercial benchmark. Now it would appear that they have changed the
game by supporting symmetric multiprocessing. I suspect that we will
now have to look at their MPs as well as their UPs in positioning them
in the commercial benchmark.

In Computerworld, 3/21/88, page 1, "DEC Stalks Big Game with Symmetrical
VMS", there is the following sentence. "The full release of VMS release
5, with added online transaction processing, hooks to a new database and
other enhancements, is due out later this spring". This is the first
I've heard of a new database. Has anyone heard similar
rumors about a new database?