IBM zEnterprise

shmuel+ibm-main@PATRIOT.NET (Shmuel Metz , Seymour J.) writes:
ITYM when did memory become storage. Certainly the use of "memory"
predates the S/360.

there was a big deal with the announcement of 370 virtual memory ... about
having to change all "virtual memory" references to "virtual storage"
references ... resulting in DOS/VS, VS1, VS2, etc. vague, fading memory says
the excuse given had something to do with patents or copyright.

in the above article ... there is some amount of FUD with regard
to the mention of M44/44X ... some claiming that it is little more than
claiming that os/360 supervisor services (the SVC interface) provide an
abstract virtual environment.

--
virtualization experience starting Jan1968, online at home since Mar1970

E4 channel command (extended sense) was introduced to start providing
device characteristics (theoretically starting to minimize the amount of
device information that had to be provided in system/IO sysgens).

Recently there have been some issues/nits related to FBA ... regarding
efforts to migrate from 512 byte to 4096 byte sectors (which might be
the closest analogy to all the CKD transitions) ... aka things like

a lot of the discussion (aligned & multiples) in the above is almost
identical to issues I faced when I did the page-mapped enhancements to
cp67/cms filesystem (nearly 40yrs ago) ... some past posts
http://www.garlic.com/~lynn/submain.html#mmap
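
a rough sketch (in python, purely illustrative and not anything from the
period) of the "aligned & multiples" issue: 512-byte logical sectors sitting
on top of 4096-byte physical sectors pay a read-modify-write penalty whenever
a request isn't 4k-aligned and a 4k multiple:

# sketch: 512-byte logical sectors on 4096-byte physical sectors.
# a request that is 4k-aligned and a multiple of 4k maps 1:1 onto physical
# sectors; anything else forces the device into read-modify-write.

PHYS = 4096   # physical sector size
LOG  = 512    # logical sector size presented to the host

def rmw_penalty(lba512, count512):
    """return number of physical sectors needing read-modify-write."""
    start = lba512 * LOG
    end = start + count512 * LOG
    penalty = 0
    if start % PHYS:            # write starts mid physical sector
        penalty += 1
    if end % PHYS:              # write ends mid physical sector
        penalty += 1
    return penalty

print(rmw_penalty(8, 8))    # aligned, full 4k multiple -> 0
print(rmw_penalty(1, 8))    # shifted by one 512-byte sector -> 2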

--
virtualization experience starting Jan1968, online at home since Mar1970

One of the other issues in the current payment paradigm ... with or
without certificates ... is that the end-user, as relying party, is
frequently not in control of the risks & security measures related to
their assets (fraudulent transactions against their accounts).

This shows up in what kind of fraud gets publicity (at least before
the cal. state breach notification legislation) ... namely the kind
that the consumer has some control over ... lost/stolen cards ... and/or
recognizing "add-on" ATM cash machine skimmers. There was almost no
publicity about breaches and/or instances where skimmers were installed
in machines at point of manufacture ... since about the only
corrective action that consumers would have (in such cases) was to
stop using the card altogether.

I was one of the co-authors of the financial industry X9.99 privacy
standard ... and one of the most difficult concepts to get across was
that the institution wasn't providing security to protect the
institution's own assets ... but providing security to protect the assets
of other entities (it required a rethink by security departments about what
was being protected from whom ... in some cases it even required the
institution to protect consumer assets from the institution itself).
http://www.garlic.com/~lynn/subpubkey.html#privacy

Several of the participants were also heavily involved in privacy
issues and had done in-depth, detailed consumer/public surveys
... where the number one issue came up as "identity theft"
... primarily the form involving fraudulent financial transactions
("account fraud") from information harvested in breaches. There seemed
to be little or no activity in correcting problems related to breaches
... so they appeared to think that data breach notifications might
prompt corrective action (aka ... the crooks would perform fraudulent
financial transactions at institutions other than the one that had
the data breach ... if nothing else to minimize LEOs determining
the source of the information). As a result ... institutions having
breaches experienced very little downside and any corrective action
was pure cost w/o any direct benefit to the institution (at least
prior to data breach notification).
http://www.garlic.com/~lynn/subintegrity.html#harvest

Part of the paradigm change around the x9.59 financial transaction
standard relieved the institutions (that had little direct interest
in protecting your information) from having to protect your
information. Besides security proportional to risk and
parameterised risk management ... this also has the concept that the
parties at risk have increased control over the actual protection
mechanisms (a security failure mode is mandating that parties
with little or no vested interest/risk be responsible for the
security measures).
http://www.garlic.com/~lynn/x959.html#x959

There is an analogous scenario in the recent financial mess
... involving an environment where institutional parties were motivated
to do the wrong thing. Congressional testimony pointed out that it is
much more effective to change the business process environment so the
parties have a vested interest in doing the right thing ... as opposed to
all the regulations in the world ... attempting to manage an
environment where the parties have a vested interest in doing the wrong
thing.

--
virtualization experience starting Jan1968, online at home since Mar1970

Part of the original SSL security was predicated on the user
understanding the relationship between the webserver they thought they
were talking to, and the corresponding URL. They would enter that URL
into the browser ... and the browser would then establish that the URL
corresponded to the webserver being talked to (both parts were
required in order to create an environment where the webserver you
thought you were talking to was, in fact, the webserver you were
actually talking to). This requirement was almost immediately violated
when merchant servers found that using SSL for the whole operation
cost them 90-95% of their thruput. As a result, the merchants dropped
back to just using SSL for the payment part and having a user click on
a check-out/payment button. The (potentially unvalidated, counterfeit)
webserver now provides the URL ... and SSL has been reduced to just
validating that the URL corresponds to the webserver being talked to
(or validating that the webserver being talked to, is the webserver
that it claims to be; i.e. NOT validating that the webserver is the
one you think you are talking to).
http://www.garlic.com/~lynn/subpubkey.html#sslcerts
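
a minimal sketch (python, illustrative only) of what the browser-side SSL
check by itself establishes ... that the certificate chain verifies and that
the certificate matches the hostname taken from the URL being contacted;
whether that URL is the one the user actually intended is outside the
protocol (www.example.com below is just a placeholder hostname):

# what SSL validates: certificate chain + certificate/hostname match for the
# URL being contacted. if a (counterfeit) webserver supplies the URL via a
# checkout button, the same check "succeeds" against the crook's own
# hostname/certificate.
import socket, ssl

def ssl_check(hostname, port=443):
    ctx = ssl.create_default_context()       # verifies the cert chain ...
    ctx.check_hostname = True                 # ... and the cert/hostname match
    with socket.create_connection((hostname, port), timeout=10) as sock:
        with ctx.wrap_socket(sock, server_hostname=hostname) as tls:
            return tls.getpeercert()["subject"]

print(ssl_check("www.example.com"))           # placeholder hostname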

Now, the backend of the SSL payment process was an SSL connection between
the webserver and a "payment gateway" (which sat on the internet and acted
as gateway to the payment networks). Under moderate to heavy load, the
avg. transaction elapsed time (at the payment gateway, thru the payment
network) round-trip was under 1/3rd of a second. The avg. roundtrip at
merchant servers could be a little over 1/3rd of a second (depending on the
internet connection between the webserver and the payment gateway).
http://www.garlic.com/~lynn/subnetwork.html#gateway

I've referenced before doing BSAFE benchmarks for the PKI/certificate
bloated payment specification ... and using a speeded up BSAFE library
... the people involved in the bloated payment specification claimed
the benchmark numbers were 100 times too slow (apparently believing
that standard BSAFE library at the time ran nearly 1000 times faster
than it actually did).

Merchants that found using SSL for the whole consumer interaction
would have required ten to twenty times the number of computers ... to
handle equivalent non-SSL load ... were potentially being faced with
needing hundreds of additional computers to handle just the BSAFE
computational load (for the mentioned extremely PKI/certificate
bloated payment specification) ... and still wouldn't be able to
perform the transaction anywhere close to the elapsed time of the
implementation being used with SSL.
http://www.garlic.com/~lynn/subpubkey.html#bloat
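
back-of-envelope arithmetic (python, purely illustrative) for the server-count
claims above: if SSL for the whole shopping experience leaves only 5-10% of
the non-SSL thruput, handling the same load takes 10-20x the servers, and any
further per-transaction crypto inflation multiplies the count again:

# servers needed when each one retains only a fraction of its non-SSL capacity
def servers_needed(base_servers, thruput_fraction):
    return base_servers / thruput_fraction

print(servers_needed(10, 0.10))   # 10 servers at 10% thruput -> 100
print(servers_needed(10, 0.05))   # 10 servers at  5% thruput -> 200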

--
virtualization experience starting Jan1968, online at home since Mar1970

the thing that even made the 3390 statement possible was the existence
of the whole additional complexity of the CKD virtualization layer on
top of FBA devices; aka there is no longer any direct relationship
between what the system thinks of as DASD CKD geometry and what actually
exists (something that the underlying FBA paradigm had done for the rest
of the industry back in the 70s).

as I've commented before ... I was told that even providing fully
integrated and tested MVS FBA support, it would still cost an
additional $26M for education, training, and publications to ship ... and
the business justification had to show incremental profit that more
than covered that $26M; aka on the order of $300M or so in additional
DASD sales directly attributable to FBA support (and the claim was
that customers were buying DASD as fast as it could be built ... and
the only thing that FBA support would do was switch the same amount of
disk sales from CKD to FBA). It was not allowed to use life-cycle
cost savings ... either internal cost savings and/or customer cost
savings ... as business justification for shipping FBA support.

--
virtualization experience starting Jan1968, online at home since Mar1970

Kees.Vernooij@KLM.COM (Vernooij, CP - SPLXM) writes:
If inventing a good name is one thing, reusing it is apparently still
better. I know at least 3 IBM products/features that were/are called
Hydra. Apparently this is a 'monster'ly well working term.

in the same time frame as the "virtual memory" to "virtual storage"
change ... there was also work on online computing for DOS/VS and VS1
that was to be called "personal computing option" (PCO; aka sort of
entry version of TSO).

they viewed the work on the morph of cp67/cms to vm370/cms as competition.
some part of the PCO group had written a "simulator" and would
frequently publish (simulated) thruput benchmarks ... the vm370/cms group
was then required to do "real" benchmarks ... showing equivalent operation
(although doing the real benchmarks consumed a significant percentage of the
total development group resources, compared to the trivial effort
required of the PCO resources). The vm370/cms benchmarks were sometimes
better and sometimes worse than the PCO simulated numbers. However, when
PCO was finally operational, it turned out that their real thruput
numbers were something like 1/10th of what had been claimed for the
simulated numbers.

later, the people doing the above articles turned out an IBM (science center)
product called vs/repack ... which did semi-automated program
reorganization for the paged, virtual memory environment. Its program
analysis was also used for things like "hot-spot" identification. The
program had been used extensively by other product groups (like IMS) as
part of improving performance in a virtual memory environment.

i distinctly remember, in the runup to virtual memory being announced for
370, corporate pushing a hard line about all "virtual memory" references
being changed to "virtual storage" (although the memory is fading about the
reason for such change).

I had done a lot with virtual memory and paging algorithms (for cp67) as an
undergraduate in the 60s ... which was incorporated in various products
over the years. Later, at the Dec81 ACM SIGOPS conference ... I was
approached by a former colleague about helping somebody get their PhD at
Stanford (on virtual memory algorithms). There had been other academic
work in the 60s on virtual memory algorithms ... which was nearly the
opposite of what I had done (and the subject of the '81 Stanford PhD
work) ... and there was strong opposition from those academic quarters
to awarding the PhD. For some reason there was management opposition to
my providing supporting information for the Stanford PhD, which delayed
my being able to respond for nearly a year (regarding my 60s
undergraduate work) ... copy of part of the old response
http://www.garlic.com/~lynn/2006w.html#email821019

in the payment process ... the transaction information (involved in
majority of data breach news because of the fraudulent transaction
financial motivation) at risk at the merchants and/or transaction
processor ... isn't their information at risk ... it is the public's
information at risk. w/o various regulation there is little vested
interest for those parties to protect assets that don't belong to
them.

the analogy in the recent financial mess was unregulated loan
originators being able to package loans (w/o regard to loan quality or
borrowers' qualifications) into toxic CDOs and pay rating agencies for
triple-A ratings (when both the CDO sellers and the rating agencies
knew that the toxic CDOs weren't worth the triple-A rating; from
fall2008 congressional testimony). The loan originators are the
responsible parties for the loans ... but being able to unload every
loan (as a triple-A rated toxic CDO) eliminated all their risk (some
proposals have been floated that loan originators have to retain some
financial interest in the loans they originate).

as mentioned, the x9.59 retail payment financial standard took a different
approach ... by slightly tweaking the paradigm ... it eliminated the
risk associated with those data breaches ... and therefore the dependency
on parties that have no direct motivation to protect the associated
information (it didn't try to use regulation and other means to force
protection of assets at risk by parties that have no interest ... it
eliminated the risk associated with those assets ... and therefore any
requirement to force parties w/o a direct vested interest to provide
security/protection).

--
virtualization experience starting Jan1968, online at home since Mar1970

GSM eavesdropping

On 8/2/2010 4:19 PM, Paul Wouters wrote:
"The default mode for any internet communication is encrypted"

the major use of SSL in the world today is hiding financial
transaction information, because the current paradigm is extremely
vulnerable to a form of replay-attack ... aka using information from
previous transactions to perform fraudulent financial transactions.

One of the things done by the x9a10 financial standard working group
was to slightly tweak the paradigm with the x9.59 financial standard
... eliminating the replay-attack vulnerabilities .... and therefore
the requirement to hide financial transaction details (as a
countermeasure) ... which also eliminates the major requirement for SSL
in the world today. The side-effect of the x9.59 paradigm tweak is that it
eliminated the replay-attack vulnerability of transaction information
regardless of where it exists (in flight on the internet, at rest in
repositories, wherever).

x9.59 can be viewed as using strong Authentication and Integrity
in lieu of privacy/confidentiality (required in the current paradigm to
hide information as a countermeasure to replay-attack vulnerabilities).
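
a minimal sketch (python, illustrating the general idea only ... NOT the
actual x9.59 message format; ed25519 and the "cryptography" package are
stand-in choices here): the transaction carries a digital signature, so a
harvested copy of the account number and transaction details is useless for
originating a new, fraudulent transaction ... authentication and integrity
instead of secrecy:

# requires: pip install cryptography; ed25519 is a stand-in, not x9.59's choice
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey
from cryptography.exceptions import InvalidSignature

priv = Ed25519PrivateKey.generate()        # held by the account holder
pub = priv.public_key()                    # registered with the issuer

txn = b"acct=1234567890;amount=19.95;merchant=example;nonce=0001"
sig = priv.sign(txn)                       # authorizes this transaction only

# issuer-side check: integrity + authentication, no secrecy required
try:
    pub.verify(sig, txn)
    print("authorized")
except InvalidSignature:
    print("rejected")

# a crook replaying the harvested account number with different details fails
try:
    pub.verify(sig, b"acct=1234567890;amount=5000.00;merchant=crook;nonce=x")
except InvalidSignature:
    print("replay/forgery rejected")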

--
virtualization experience starting Jan1968, online at home since Mar1970

In the parallels with the 20s stock market speculation ... there were
large numbers purely speculating in real estate ... involving
non-owner occupied homes (non-owner occupied speculation, with lots of
flipping and turn-over churn, contributed significantly to the
inflation/bubble runup). Owner occupied homes are more analogous to
some amount of the public involved in the 20s stock market bubble as
collateral damage.

Major fuel for the whole thing was unregulated loan originators being
able to pay the rating agencies for triple-A ratings on their
(mortgage backed) toxic CDOs (roughly the role played by Brokers'
loans in the 20s ... previous post referencing the Pecora hearings). Toxic
CDOs had been used in the S&L crisis w/o the same effect as this time
... since w/o the triple-A rating ... there was a limited market. Being
able to pay for triple-A ratings allowed the triple-A rated toxic
CDOs to be sold to all the institutions and operations (including
retirement and pension plans) that had mandates to only deal in
triple-A rated "SAFE" instruments (as long as the bubble lasted,
providing nearly unlimited funds for the unregulated loan
originators).

There were also heavy fees & commissions all along the way
... providing significant financial motivation for individuals to play
(and more than enough motivation to offset any possible concern the
individuals might have regarding the risk to their institution, the
economy, and/or the country).

Non-owner occupied speculation might see 2000% ROI (given inflation
rates in some parts of the country, inflation further fueled by the
speculation) using no-documentation, no-down, 1% interest-only payment
ARMs ... flipping before the rates adjusted

--
virtualization experience starting Jan1968, online at home since Mar1970

The big difference between the use of mortgage-backed securities
(toxic CDOs) in the S&L crisis and the current financial mess was
unregulated loan originators (regardless of their parent company)
being able to pay the rating agencies for triple-A ratings (when both
knew that the toxic CDOs weren't worth triple-A ratings ... from
fall2008 congressional hearing testimony).

As a result, they effectively had an unlimited source of funding for
lending purposes and were able to unload the loans and eliminate all
their risk (eliminating any motivation to care about loan quality or
borrowers' qualifications). Recent suggestions are for loan
originators to retain some interest (and therefore risk) in their loans
(as a motivation to pay some attention to loan quality and/or
borrowers' qualifications). However, that doesn't address the major
difference between the S&L crisis and the current mess ... the rating
agencies willing to "sell" triple-A ratings for the toxic CDOs.

fall2008 congressional hearing testimony claimed that rating agency
business processes were "mis-aligned". in theory, the ratings are a
"safety and soundness" interest/benefit for the buyers. However, the
sellers are paying for the ratings ... and have an interest in getting
the highest possible price from the largest possible market (based on
the ratings) ... so the rating agencies were responding to the
interests of the sellers ... and selling "triple-A" ratings.

There has been some amount written about how things would be much better
if business processes are correctly aligned and the parties are
motivated to do the right thing ... compared to business processes
being mis-aligned and the entities being motivated to do the wrong thing
(which makes regulation enormously more difficult when lots of the
players are constantly motivated to do the wrong thing).

--
virtualization experience starting Jan1968, online at home since Mar1970

2301 was almost a 2303 ... except it transferred data on four heads in
parallel (i.e. same capacity, 1/4th the number of tracks with four times
the capacity/track ... and four times the data transfer rate).

the original cp67 install at the univ. in jan68 had fifo queuing for
moveable arm devices and the 2301 drum ... and single operation at a time
execution. I modified the 2311/2314 disk support to add ordered arm seek
queuing (which would about double 2314 effective thruput under heavy
load) and did ordered, multiple page I/O chaining for the 2301 drum.

With single page i/o transfers on the 2301 drum ... cp67 would saturate at
about 80 page i/os per second. With chained requests, I could get peaks
approaching 300 page i/os per second (chained requests eliminated the
avg. 1/2 rotational delay on every page transferred).
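
a rough parametric model (python; illustrative only, the rotation time,
pages/track and overhead numbers below are hypothetical, chosen just to land
near the 80 vs ~300 page i/o figures): single-page transfers pay an average
half-rotation latency per page plus i/o start and interrupt overhead; chained,
rotationally-ordered requests amortize both:

def pages_per_sec_single(rot_ms, pages_per_track, overhead_ms):
    # one SIO/interrupt + avg half rotation latency + one page transfer
    per_page = overhead_ms + rot_ms / 2 + rot_ms / pages_per_track
    return 1000.0 / per_page

def pages_per_sec_chained(rot_ms, pages_per_track, overhead_ms, chain_len):
    # one SIO/interrupt per chain, ~half rotation to the first page,
    # then back-to-back transfers
    per_chain = overhead_ms + rot_ms / 2 + chain_len * rot_ms / pages_per_track
    return 1000.0 * chain_len / per_chain

print(pages_per_sec_single(17.5, 9, 2.0))       # ~80/sec
print(pages_per_sec_chained(17.5, 9, 2.0, 9))   # ~300/sec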

U.S. operators take on credit cards with contactless payment trial

In the 90s, there was a lot of hype that the telcos were going to take
over the payment industry. Part of the scenario was that the telco
backends (scaled to handle call records) were the only ones that could
handle the anticipated micropayment volumes. Once they had dominance
with micropayments ... they would use the position to take-over the
rest of the payment industry. Some number of the telcos even built a
fairly good size issuing customer base ... and you would see telco
reps at various and sundry payment industry related meetings.

By the end of the 90s, all that had unraveled ... with the telcos
having unloaded their issuing portfolios. One postmortem was that the
telcos had been fairly used to customers skipping out on monthly bills
w/o a good industry process for followup and collection. Part of this
was that those were charges for infrastructure services ... but not
actually major out-of-pocket money (sort of a cost-of-doing-business
surcharge).

However, the credit card business resulted in real out-of-pocket money
transfers to merchants ... in merchant settlement. When customers
skipped out on payment of credit-card bills ... it was a much bigger
hit to bottom line ... than skipping out on monthly service
bill. Supposedly it was major culture shock to the telco players
dabbling in the credit card issuing business.

And, of course, the micropayment volumes never did take off.

Since then some of the players in payment processing have installed
backend platforms that were originally developed (and scaled) for the
telco call record volumes.

from above:
Gray is known for his groundbreaking work as a programmer, database
expert and Microsoft engineer. Gray's work helped make possible such
technologies as the cash machine, ecommerce, online ticketing, and
deep databases like Google.

... snip ...

aka a lot of the database work in areas like transactions and commits
provided a higher level of assurance in computer/electronic financial
records (for auditors ... as opposed to requiring paper/hardcopy
records).

when gray left for tandem ... he handed off some amount of consulting
with financial institutions (like BofA) to me.

smartcards were done in europe as "stored-value" cards (i.e. value was
theoretically resident in the card) and were designed to be used for
offline transactions ... compensating for the cost &/or lack of
connectivity in much of the world at the time.

the equivalent market niche in the US became the magstripe
stored-value, merchant, gift cards ... doing online transactions
(because of greater connectivity and lower cost structure in the US).

we had been asked to design, size, scale, and cost the backend
dataprocessing infrastructure for the possible entry of one of the major
european smartcards into the US. When we were doing this ... we looked
at the rest of the business & cost structure. Turns out that the
smartcard operator was making a significant amount of money off the
float (the value supposedly resident in the smartcards). Those smartcard
programs pretty much disappeared with 1) significant improvement in
connectivity and lowered telco costs ... changing the trade-offs and 2)
major european central banks publishing a directive that the
operators would have to start paying interest on the smartcard stored
value (eliminating their major financial motivation).

in the 70s & 80s, financial backend batch processes had online
front-end financial transactions added (including ATM transactions)
... but the transactions still weren't actually finalized until the
batch cobol ran in the overnight batch window. in the 90s, there was severe
stress on the overnight batch window, with increasing workload and
globalization (decreasing the length of the overnight batch window).
numerous financial institutions spent billions to re-engineer for
straight-through processing (i.e. each transaction runs to completion,
eliminating the overnight batch window settlement/processing). The scenario
was parallel processing on large numbers of "killer micros" ... which would
offset the increased overhead involved in moving off batch. However,
there was little upfront speeds&feeds work done ... so it wasn't until
late in the deployments that they discovered the technologies (they were
using) had overhead inflation of 100 times (compared to cobol batch),
totally swamping the anticipated thruput from large numbers of
parallel killer micros.
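
simple arithmetic (python, illustrative only; the per-micro thruput ratio is a
hypothetical number, not from the original work) showing why the 100x overhead
inflation swamped the plan:

def micros_needed(mainframe_equivalents, overhead_inflation, micro_vs_mainframe):
    # processors needed just to match the existing batch workload
    return mainframe_equivalents * overhead_inflation / micro_vs_mainframe

# workload equivalent to 1 mainframe, 100x overhead, each "killer micro"
# assumed at 1/10th the single-engine mainframe thruput:
print(micros_needed(1, 100, 0.1))   # -> 1000 micros just to break even
print(micros_needed(1, 3.5, 0.1))   # -> 35 with the later 3-5x overhead tech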

The failures resulted in huge retrenchment and a very risk-averse
environment in the financial industry that still lingers on
(and contributed significantly to preserving a major portion of the existing
big iron/mainframe market). A couple of years ago there were attempts to
interest the industry in brand-new real-time, straight-through
transaction processing technology ... which only had 3-5 times the
overhead of batch cobol (easily within the price/performance of
parallel "killer micros") ... but the spectre of the failures in the
90s was still casting a dark shadow on the industry.

--
virtualization experience starting Jan1968, online at home since Mar1970

from above:
Facebook has countered that it picked Oregon because of its dry and
temperate climate. That allows it to use a technique called evaporative
cooling to keep its servers cool, instead of a heavy mechanical
chiller. Facebook says the data center will be one of the most
energy-efficient in the world.

... snip ...

the back side of western mountain ranges tends to have a "rain shadow"
... semi-arid or arid regions where the coastal winds have dropped their
moisture on the western slopes. See that on the eastern slopes of the
cascades, rockies, sierras; see it to a lesser extent on the back side of
the (lower) santa cruz mountains.

Several of the mega-datacenters have gone into the washington/oregon
region ... which also has large amounts of water (just not in the air) and
hydro-electric power.

History of Hard-coded Offsets

rfochtman@YNC.NET (Rick Fochtman) writes:
At NCSS we devised a scheme to use 2305 devices for paging. We figured
3 pages per track and we inserted a "gap record" between the
pages. Thus we were able to fetch all three pages, from three
different exposures, in a single revolution of the device. Ditto for
writing a page as well. A guy named Grant Tegtmeier was the "mover and
shaker" behind this scheme, as well as some other DASD modifications
that also made huge differences in overall performance. Last I knew,
he was out in Silicon Valley and I'd sure like to contact him again,
for old times' sake.

the use of gap records was standard on 2305 (a 2305 track had more than
enough room for the dummy records) and (sort-of) on 3330.

the 2305 had multiple exposures and it was also possible to dedicate a
specific exposure to all requests for records at a specific rotational
position ... eliminating chained requests having to process a (chained)
seek head CCW in the rotational latency between the end of one record
and the start of the following record (small dummy records were used to
increase the rotational latency between the end of the preceding page
record and the start of the next page record ... allowing time for the
processing of the chained seek head). In any case, chained requests
amortized the overhead of i/o initiation and interrupt processing across
multiple page transfers ... while startio/interrupt per request (using
multiple exposures) could improve responsiveness (at the cost trade-off
of more overhead).
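
the trade-off in numbers (python; the SIO/interrupt overhead figures are
hypothetical, just to make the amortization visible): chained requests pay one
start-i/o and one interrupt per chain, while startio-per-request across
multiple exposures pays that overhead on every page but can dispatch each page
as soon as it is requested:

SIO_OVERHEAD_US = 300.0     # assumed cpu cost to start an i/o
INTR_OVERHEAD_US = 300.0    # assumed cpu cost to service the interrupt

def cpu_us_per_page(chain_len):
    return (SIO_OVERHEAD_US + INTR_OVERHEAD_US) / chain_len

print(cpu_us_per_page(1))   # one page per SIO (multiple exposures): 600 us/page
print(cpu_us_per_page(3))   # three pages chained per rotation:      200 us/page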

the dynamic adaptive resource manager (sometimes called the fairshare
scheduler because the default resource policy was fairshare), page
replacement algorithms, request chaining (for 2301 & 2314) and ordered
seek (for 2314) that I did as an undergraduate were picked up and released
in cp67. in the morph from cp67 to vm370 ... a lot of that stuff got
dropped. SHARE was lobbying that I be allowed to put a bunch of the
stuff back into vm370.

With the failure of future system ... most internal groups had been
distracted ... allowing 370 software & hardware product pipelines to
go dry ... there was a mad rush to get stuff back into the 370 product
pipeline. misc. past posts mentioning future system
http://www.garlic.com/~lynn/submain.html#futuresys

in any case, the mad rush to get stuff back into the 370 product
pipeline ... tipped the scales, allowing bits & pieces of stuff that I
had been doing to be released ... including the "resource manager"
(which had a whole lot more stuff than the straight dynamic adaptive
resource manager, and also was the guinea pig for starting to charge for
kernel software). misc. past posts mentioning the resource manager
http://www.garlic.com/~lynn/subtopic.html#fairshare
misc. past posts mentioning paging & virtual memory
http://www.garlic.com/~lynn/subtopic.html#wsclock

there were some issues with whether 2305s could really operate on 158s
(because of integrated channel overhead) at the standard specified
channel cable distances. there were some number of poorly performing 158s
where it turns out the 2305s were not doing three transfers per rotation
... but taking additional rotations. Things would improve when channel
lengths were shortened.

The big problem in this area was the 3330 ... the 3330 track didn't
officially allow for a big enough dummy record between three 4k records
(to allow a seek head CCW to be inserted to switch tracks between the end
of one page and the start of the next).

again, the real problem was with the 158 and the latency/overhead in the
integrated channel processing. I did a whole series of tests across a
number of different processors (148, 4341, 158, 168, 303x, some clone
processors, etc), 3330 controller vendors (not just IBM), and block sizes
(looking for the threshold where the seek head could be processed within
the rotational latency for a specific block size ... i.e. start with the
smallest possible dummy block ... perform the rotational transfer tests,
then increase the size ... looking for the minimum dummy block size that
could transfer three pages ... all on different tracks ... in one rotation).

most of the clone 3330 disk controllers were faster (required a smaller
dummy block size) than the standard 3830. The 148, 4341, and 168 ... were all
much better than the 158. All the 303x processors exhibited the same
characteristic as the 158 ... since the channel director used for all
303x processors was a 158 engine with just the integrated channel
microcode (and w/o the 370 microcode). I still even have the assembler
program some place that I used for all the tests.
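
a sketch of the search loop described above (python, illustrative; the actual
test was a 370 assembler program, and time_three_page_chain below is a
hypothetical measurement hook standing in for the real channel-program
timing): grow the dummy/gap record until a chained transfer of three pages on
three different tracks completes in a single rotation:

def find_min_gap(time_three_page_chain, rotation_ms, max_gap=1000, step=10):
    # time_three_page_chain(gap_bytes) -> elapsed ms for the chained transfer
    for gap_bytes in range(0, max_gap + 1, step):
        elapsed = time_three_page_chain(gap_bytes)
        if elapsed <= rotation_ms * 1.05:     # completed in ~one rotation
            return gap_bytes                  # minimum workable gap size
    return None                               # head switch never made it

# usage: min_gap = find_min_gap(run_test_on_this_processor, rotation_ms=16.7)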

Region Size - Step or Jobcard

m42tom-ibmmain@YAHOO.COM (Tom Marchant) writes:
More precisely, MVT had a single address space

aka VS2/SVS was minimally modified MVT in a single (16mbyte) virtual
address space; biggest change was borrowing ccwtrans from cp67 for EXCP
... to take the application-passed channel program and make a copy of it
... substituting real addresses for virtual addresses.
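
a sketch of the idea behind ccwtrans (python, illustrative only ... not the
actual cp67/SVS code): make a copy of the application channel program,
replacing virtual data addresses with real ones via the page table; page
fixing, IDALs, and data areas crossing page boundaries are all omitted here:

PAGE = 4096

def translate_ccws(ccws, page_table):
    """ccws: list of (command, virtual_addr, flags, count) tuples;
    page_table maps virtual page number -> real page frame number."""
    shadow = []
    for cmd, vaddr, flags, count in ccws:
        frame = page_table[vaddr // PAGE]        # page fault if not resident
        raddr = frame * PAGE + (vaddr % PAGE)
        shadow.append((cmd, raddr, flags, count))
    return shadow                                 # EXCP starts this copy

page_table = {0x10: 0x2A, 0x11: 0x07}             # made-up mapping
prog = [(0x02, 0x10100, 0x40, 2048), (0x02, 0x11000, 0x00, 2048)]   # reads
print(translate_ccws(prog, page_table))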

there was a specially modified MVT os/360 release 13 done at boeing
huntsville. MVT storage/memory management became heavily fragmented with
long running jobs. boeing huntsville had a pair of 360/67s for long
running cad/cam jobs with 2250 vector graphics under MVT OS/360 release
13. MVT release 13 was modified to use the 360/67 virtual memory
hardware to reorganize storage/memory locations (compensating for the
enormous storage fragmentation). there was no paging going on ... just
address translation.

MVS kernel image was made half of each 16mbyte application virtual
address space (to simplify pervasive use of pointer-passing APIs).

The problem manifested itself when all the subsystems were also moved
into their own separate, individual, virtual address spaces ... now
the pointer-passing API between applications and subsystems started to
break down. The solution was the "common segment" ... a part of every
virtual address space that could have dedicated areas for moving
parameters into so as to not break the pointer passing API
paradigm. The problem was that the demand for common segment area grew
as the size of systems and number of subsystems grew. Some large MVS
shops were even facing the possibility of moving from 5mbyte common
segment to 6mbyte common segment ... reducing maximum application area
to 2mbytes.
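
the squeeze in numbers (python, just the arithmetic implied above):

MB = 1 << 20
address_space = 16 * MB
kernel_image = 8 * MB            # MVS image mapped into every address space

for common_seg in (1, 5, 6):     # common segment sizes mentioned above (MB)
    app_area = address_space - kernel_image - common_seg * MB
    print(common_seg, "MB common segment ->", app_area // MB,
          "MB left for the application")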

Burlington ... an internal chip foundry and big internal MVS shop, had an
enormous problem ... with major fortran applications that were
constantly threatening to exceed 7mbytes, and lots of carefully crafted
MVS systems that maintained the common segment at 1mbyte. A move off MVS
to CMS would have eliminated a whole lot of the effort that was constantly
going into keeping their applications working in the MVS environment
... but that would have been an enormous blow to MVS prestige and image.

--
virtualization experience starting Jan1968, online at home since Mar1970

Age

Walter Bushell <proto@panix.com> writes:
Heinlein did, for one. Just like our recent economic collapse, anyone
interested should have seen it coming. However, there was no money in
forecasting either and lots of money in remaining oblivious.

... lots of money in playing ball.

a couple of years ago there was an interview with a successful person on
wallstreet ... saying that the large successful operations had been gaming
the system for years ... and considered it to have little or no downside;
something about the people at the SEC being so ignorant that they would
never be able to figure out what was going on.

one might make some parallel with successful parasites ... that have to
know how to maximize the blood they can suck w/o severely impacting the
host (however, they can periodically get carried away).

--
virtualization experience starting Jan1968, online at home since Mar1970

i.e. MVS was still mostly the same address space ... or every MVS
virtual address space was mostly the same (some installations, 13mbytes
& threatening to become 14mbytes, of every 16mbyte virtual address
space) ... in large part because of the ingrained, pervasive
pointer-passing API paradigm.

--
virtualization experience starting Jan1968, online at home since Mar1970

We had been asked to come in and consult with a small client/server
startup that wanted to do payment transactions on their server; the
startup had also invented this technology called "SSL" they wanted to
use; the result is now sometimes called "electronic commerce".

Somewhat as a result, in the mid-90s, we were invited to participate
in the x9a10 financial standard working group which had been given the
requirement to preserve the integrity of the financial infrastructure
for ALL retail payments (i.e. ALL: credit, debit, stored-value,
point-of-sale, face-to-face, unattended, remote, internet, wireless,
contact, contactless, high-value, low-value, transit turnstile, aka
ALL). The result was the x9.59 financial standard payment protocol. One
of the things done in the x9.59 financial standard was to slightly tweak
the current paradigm to eliminate the risk of using information from
previous transactions (like account numbers) for a form of
"replay-attack" ... i.e. performing fraudulent financial
transactions. This didn't eliminate skimming, eavesdropping, data
breaches, etc ... it just eliminated the risk from such events ... and
the possibility of fraudulent financial transactions as a result.

Now, the major use of SSL in the world today ... is this earlier work
we had done for payment transactions (frequently called "electronic
commerce") ... to "hide" transaction detail/information. With x9.59,
it is no longer necessary to hide such information (as a
countermeasure to fraudulent financial transactions) ... so X9.59 also
eliminates the major use of "SSL" in the world today.

--
virtualization experience starting Jan1968, online at home since Mar1970

Age

greymausg writes:
Amazingly enough, from memory, the population almost
doubled during that time, so there is something that the
Vietnamese are better at than war.

not necessarily ... there have been instances of big population jumps in
various parts of the world after the introduction of newer medicines and
medical techniques (it doesn't require a change in birth rates ... just a
reduction in mortality with a corresponding increase in avg life
expectancy). in other places, this has sometimes resulted in subsequent
big mortality spikes from starvation, when the population explosion (from
the introduction of modern medicine) gets out-of-kilter with what the
environment is otherwise able to support.

--
virtualization experience starting Jan1968, online at home since Mar1970

Note that the first shipped relational dbms was from the Multics group
... which was located on 5th flr of 545 tech sq (science center,
virtual machines, internal network, GML, etc ... was on 4th flr of 545
tech sq).

The version of Zeus detected by Trend Micro had a digital certificate belonging
to Kaspersky's Zbot product, which is designed to remove Zeus. The certificate --
which is verified during a software installation to ensure a program is what it
purports to be -- was expired, however.

there was another scenario of certificate-copying (& dual-use
vulnerability) discussed in this group a while ago. The
PKI/certificate bloated payment specification had floated the idea
that when payment was done with their protocol, the dispute
burden-of-proof would be switched & placed on the consumer (from the
current situation where the burden-of-proof is on the
merchant/institution; this would be a hit to "REG-E" ... and also
apparently what has happened in the UK with the hardware token
point-of-sale deployment).

However, supposedly for this to be active, the payment transaction
needed a consumer-appended digital certificate that indicated they
were accepting the dispute burden-of-proof. The issue was whether the
merchant could reference some public repository and replace the
digital certificate appended by the consumer ... with some other
digital certificate for the same public key (possibly a digital
certificate actually obtained by the consumer for that public key at
some time in the past ... or an erroneous digital certificate produced
by a sloppy Certification Authority that didn't adequately perform the
check for the applicant's possession of the corresponding private key).

Of course, since the heavily bloated PKI/certificate payment
specification performed all PKI-ops at the internet boundary ... and
then passed a normal payment transaction with just a flag claiming
that PKI-checking had passed ... they might not need to even go that
far. There were already stats on payment transactions coming thru with
the flag on ... where it could be proven that no corresponding PKI-checking
had actually occurred. With the burden-of-proof on the consumer ... the
merchant might not even have to produce evidence that the appended
digital certificates had been switched.

--
virtualization experience starting Jan1968, online at home since Mar1970

from above:
FinCen is addressing concerns that because prepaid cards and similar
devices can be easily obtained and are anonymous, they are attractive
for money laundering, terrorist financing, and other illegal
activities. Under the rules, providers of prepaid access would be
required to meet the same registration, suspicious-activity reporting,
customer-information recordkeeping, and new transactional record
keeping requirements as banks.

... snip ...

Recently there were articles about too-big-to-fail financial
institutions participating in illegal drug money laundering;
discovered when the money trail (involved in buying planes for drug
smuggling) was followed. Apparently since the gov. was already doing
everything possible to keep the institutions afloat, rather than
prosecuting, sending the executives to jail, and shutting down the
institutions ... they just asked that they promise to stop doing it.

Idiotic programming style edicts

Huge <Huge@nowhere.much.invalid> writes:
I've been in IT(*) since 1975, and I'm watching the "centralised/distributed"
changeover coming round for the third time.

43xx & vax sold into the mid-range market in about equal numbers; the big
difference for 43xx was large commercial customers putting in orders for
multiple hundreds of 43xx machines. The mid-range machines opened up new
frontiers with price/performance and being able to put them into
(converted) conference rooms and departmental supply rooms. some number
of large datacenters were bursting at the seams and didn't have the
floor space for adding new stuff (and adding datacenter space was a
major undertaking & expense).

various old 43xx emails ... some discussing reaching datacenter limits
for traditional mainframes ... and using 43xx machines for expansion
... putting them out in the local environment with much less upfront
infrastructure expense (than required for traditional datacenter mainframes).
http://www.garlic.com/~lynn/lhwemail.html#4341

the above has some references to highend "big iron" possibly feeling heat
from the mid-range starting to take some of their business ... and some
resulting internal politics; aka get five 4341s for a lower aggregate
price than a 3033 and get more aggregate MIPs, more aggregate real storage,
more aggregate i/o capacity (and not require the heavy datacenter
investment).

a major effort in Medusa was increasing the number of "commodity"
processors that could be put into a single rack (and interconnecting large
numbers of racks) ... and the physical issues with handling the heat;
significantly increasing computing per sq/ft and scaling it up. This has
been a continuing theme ever since ... morphing into GRID computing and
now CLOUD computing.

The '85 "SPM" email was about NIH & product group going thru some
number of iterations producing a series of subsets of SPM (iucv, smsg,
etc) ... when they could have shipped the full SPM superset at the
outset.

--
virtualization experience starting Jan1968, online at home since Mar1970

recent post about security proportional to risk ... the merchant's
interest in the transaction (information) is proportional to profit
... possibly a couple dollars ... and the processor's interest in the
transaction (information) is possibly a couple cents ... while the
risk to the consumer (and what the crooks are after) is the credit
limit &/or account balance ... as a result the crooks may be able to
outspend (attacking the system) the merchants/processors (defending the
system) by a factor of 100 times.
http://www.garlic.com/~lynn/2010l.html#70

Big part of the current paradigm involving SSL, PCI, data breaches,
skimming, etc .... is that information from previous transactions can
be used by crooks for fraudulent financial transactions ... basically
a form of replay attack or replay vulnerability.

The X9.59 financial transaction standard slightly tweaked the existing
paradigm to eliminate the replay attack vulnerability ... which
eliminated having to hide account numbers and/or hide information from
previous transactions ... which eliminated the threat from data
breaches, skimming, and/or other forms related to criminals harvesting
such information for the purposes of performing fraudulent financial
transactions.

--
virtualization experience starting Jan1968, online at home since Mar1970

besides the issue of motivating institutions to "protect" vulnerable
consumer information ... there is a lot of difficulty (enormous
expense and lots of cracks) with attempting to prevent misuse of
(vulnerable) consumer/transaction information that is widely
distributed and required in a large number of business processes. The
x9.59 assertion is that rather than attempting to plug the millions of
possible leaks (to prevent the information from falling into the hands
of crooks), it is much more effective to tweak the paradigm and
eliminate the crooks being able to use the information for fraudulent
transactions.

--
virtualization experience starting Jan1968, online at home since Mar1970

Idiotic programming style edicts

Anne & Lynn Wheeler <lynn@garlic.com> writes:
43xx & vax sold into the mid-range market about in equal numbers, big
difference for 43xx was large commercial customers putting orders in for
multiple hundreds of 43xx machines. The mid-range machines opened up new
frontiers with price/performance and being able to put them into
(converted) conference rooms and departmental supply rooms. some number
of large datacenters were bursting at the seams and didn't have the
floor space for adding new stuff ... and adding datacenter space was
major undertaking & expense).

past post with decade of vax numbers, sliced and diced by year, model,
us/non-us, etc ... by the mid-80s ... much of the mid-range market was
starting to give way to workstations and large PCs.
http://www.garlic.com/~lynn/2002f.html#0 Computers in Science Fiction

--
virtualization experience starting Jan1968, online at home since Mar1970

What will Microsoft use its ARM license for?

Andy Glew <"newsgroup at comp-arch.net"> writes:
One of the original motivations of RISC was that a regular,
orthogonal, instruction set might be easier for compilers to deal
with.

Now the line goes that a compiler can deal with an irregular,
non-orthogonal, instruction set.

the other scenario ... I've periodically claimed that John's motivation
was to go to the opposite complexity extreme of the (failed) future
system effort for 801 in the mid-70s ... not only simplifying the
instruction set ... but also making various hardware/compiler/software
complexity trade-offs ... decreasing hardware complexity and
compensating with more sophisticated compilers and software.

another example was the lack of hardware protection (reduced hardware
complexity) compensated for by a compiler that only generated correct code
... and a closed operating system that would only load correct programs.

this was the displaywriter follow-on from the early 80s with romp
(chip), pl.8 (compiler) and cp.r (operating system). when that product
got killed, the group looked around for another market for the box and
hit on the unix workstation market. they got the company that had done the
unix port to the pc for pc/ix ... to do one for their box ... and marketed
it as aix & the pc/rt. one issue was that the unix & c environment is
significantly different from the "only correct programs" and "closed
operating system" of the original design (requiring at least some
additions to the hardware for the different paradigm/environment).

... trivia ... predating romp was an effort to replace the large variety of
internal microprocessors (used in controllers and for low/mid range
processor engines) with 801 ... some number of 801 Iliad chips
configured for that purpose.

an example was the original as/400 (replacing the s/38), which was going to
be an 801 iliad chip ... but when that ran into trouble ... a custom cisc
chip was quickly produced for the product. as/400 did finally move off
cisc to an 801 power/pc variant a decade or so later.

--
virtualization experience starting Jan1968, online at home since Mar1970

Hardware TLB reloaders

rpw3@rpw3.org (Rob Warnock) writes:
ISTR that the reason the 370 had (at least) 4-way associative TLBs
was that there were certain instructions that could not make forward
progress unless there were *eight* pages mapped simultaneously, of
which up to four could collide in the TLB. The famous example of such
was the Translate-And-Test instruction in the situation in which the
instruction itself, the source buffer, the destination buffer, and
the translation table *all* spanned a page boundary, which obviously
needs valid mappings at least eight pages. [But only four could collide
per TLB line.]

minor trivia ... translate and translate-and-test were against the source
buffer ... using the translation table/buffer (which could cross a page
boundary). the two additional possible page references were that instead
of executing the instruction directly ... the instruction could be the
target of an "EXECUTE" instruction; where the 4-byte EXECUTE instruction
might also cross a page boundary.

the feature of the execute instruction was that it would take a byte
from a register and use it to modify the 2nd byte of the target
instruction for execution ... which in SS/6-byte instructions was the
length field (eliminating some of the reasons for altering instructions
as they appeared in storage).

note that the 360/67 had an 8-entry associative array as the translate
look-aside hardware ... in order to handle the worst case (eight) page
requirement of a single instruction.
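
counting out the worst case described above (python, just the enumeration):
EXECUTE, the target translate instruction, the source operand, and the
translate table can each straddle a page boundary, giving two pages apiece:

pieces = {
    "EXECUTE instruction (4 bytes)": 2,        # crosses a page boundary
    "target TR/TRT instruction (6 bytes)": 2,
    "source operand": 2,
    "translate table (256 bytes)": 2,
}
print(sum(pieces.values()))   # 8 ... hence the 360/67's 8-entry associative array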

more trivia ... in more recent processors ... translate & translate-and-test
have gotten a "bug" fix and become much more complex.

360 instructions always pretested both the origin and the end of an
operand for validity (in the case of variable length operand
specification ... using the instruction operand length field ... or, in the
case of the execute instruction, the length supplied from the register)
... before beginning execution ... in the above example there might be
multiple page faults before the instruction execution would actually
start.

370 introduced a couple of instructions that would execute incrementally
(MVCL & CLCL) ... although there were some early machines that had
microcode implementation bugs ... that would pretest the end of the
MVCL/CLCL operand before starting execution.

relatively recently a hardware fix was accepted for the translate &
translate-and-test instructions. the other variable length 6-byte SS
instructions have the source operand length and the destination
operand length identical. translate and translate-and-test instructions
have a table as an operand that is indexed by each byte from the source.
The assumption was that the table was automatically 256 bytes and therefore
the instruction pre-test would check validity from start-of-table through
start-of-table plus 255.

it turns out that, besides a page fault when crossing a page boundary
... there is the possibility of a storage protect exception on the page
boundary crossing. that, coupled with some applications that would do
translate on a subset of possible values ... and only built a table that
was much smaller than 256 bytes. If the table was at the end of a page
... abutting a storage protected page ... the end-of-table precheck could
fail ... even tho the translate data would never actually result in a
reference to protected storage.

so now, the translate and translate-and-test instructions have a pretest for
whether the table is within 256 bytes of a page boundary ... if not ... the
instruction executes as it has since 360 days. if the target table is within
256 bytes of the end of a page ... it may be necessary to execute the
instruction incrementally, byte-by-byte (more like mvcl/clcl).
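
a sketch of that decision (python, illustrative only ... not the actual
microcode): if the 256-byte table assumption would cross into the next page,
fall back to incremental byte-at-a-time execution; otherwise pretest
table+255 and run the instruction the way it has always run:

PAGE = 4096

def trt_strategy(table_addr):
    bytes_left_in_page = PAGE - (table_addr % PAGE)
    if bytes_left_in_page < 256:
        return "incremental (byte-by-byte, like MVCL/CLCL)"
    return "pretest table_addr..table_addr+255, then normal execution"

print(trt_strategy(0x5000))   # well inside a page -> normal execution
print(trt_strategy(0x5FA0))   # 96 bytes from the page end -> incremental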

--
virtualization experience starting Jan1968, online at home since Mar1970

RISC design, was What will Microsoft use its ARM license for?

John Levine <johnl@iecc.com> writes:
Meanwhile in a small town in suburban New York, the 801 project was
using the PL.8 compiler with state of the art analysis and
optimization. It dealt just fine with somewhat irregular instruction
sets (it generated great code for S/360) and had its registers firmly
under control, so the 801 reflected that.

for the fun of it ... old email with comparison between various pascals
and pl.8 with pascal front-end (on same pascal program) ... includes
some with execution on the same 3033 (a 4.5mip "high-end" 370):
http://www.garlic.com/~lynn/2006t.html#email810808

A Bright Future for Big Iron?

big iron has a steep learning curve and big upfront costs &
expenses. there has been lots of stuff written about the drop-off in
educational discounts in the 70s ... and institutions of higher
learning moving to other platforms. as a result, when the graduates
from that period started coming of age ... their familiarity was with
other platforms.

another explanation was that it was much easier for departments to
cost justify their own mid-size or personal computers ... and with
much simpler learning curve ... it was easier to structure a course
around some material and show productive results.

however, a big part of the mid-range 43xx numbers was the move into
distributed computing, with large corporations placing orders for several
hundred 43xx machines at a time. even at internal locations ... there
was a big explosion in 43xx machines ... which contributed to the
scarcity of conference rooms during the period; department conference
rooms were being converted to house department 43xx machines ... an
example was the Santa Teresa lab, with every floor in every tower getting
a 43xx machine.

The explosion in internal 43xx machines was also a major factor in the
internal network passing the 1000 node mark in 1983 ... aka the internal
network was larger than the arpanet/internet from just about the
beginning until possibly late '85 or early '86. some past posts
mentioning the internal network
http://www.garlic.com/~lynn/subnetwork.html#internalnet

During the late 80s ... with the proliferation of distributed computing on
workstations and PCs ... the limited connectivity into the datacenter
played a big role. There were several products vastly improving big iron
distributed computing (from the disk division), which were blocked by
the communication division ... claiming strategic responsibility
(i.e. it threatened the communication division terminal emulation
install base). Finally at the annual world-wide communication division
internal conference, one of the senior disk engineers got a talk
scheduled ... and opened it with the statement that the communication
division was going to be responsible for the demise of the disk
division. some past posts mentioning terminal emulation
http://www.garlic.com/~lynn/subnetwork.html#emulation

--
virtualization experience starting Jan1968, online at home since Mar1970

a large reservoir of big iron has been the financial industry ... back
ends with huge amounts of financial data. in the 90s, this was
threatened by "killer micros".

in the 70s & 80s, backend batch processes had online front-end
financial transactions added ... but the transactions still weren't
actually finalized until the batch cobol ran in the overnight batch window.
in the 90s, there was severe stress on the overnight batch window, with
increasing work and globalization decreasing the length of the overnight
batch window. numerous financial institutions spent billions to re-engineer
the backend for straight-through processing (i.e. each transaction
runs to completion, eliminating the overnight batch window
processing). The scenario was parallel processing on large numbers of
"killer micros" which would offset the increased overhead involved in
moving off batch. However, there was little upfront speeds&feeds work
done ... so it wasn't until late in the deployments that they discovered
the technologies (they were using) had overhead inflation of 100
times (compared to cobol batch), totally swamping the anticipated
thruput from large numbers of parallel killer micros.

The failures resulted in huge retrenchment and a very risk-averse
environment in the financial industry that still lingers on
(and contributed significantly to preserving a major portion of the existing
big iron market). A couple of years ago there were attempts to interest the
industry in brand-new real-time, straight-through transaction
processing technology ... which only had 3-5 times the overhead of
batch cobol (easily within the price/performance of parallel "killer
micros") ... but the spectre of the failures in the 90s was still
casting a dark shadow on the industry.

from above:
Security vendor M86 Security says it's discovered that a U.K.-based
bank has suffered almost $900,000 (675,000 Euros) in fraudulent
bank-funds transfers due to the ZeuS Trojan malware that has been
targeting the institution

We had been brought in as consultants to a small client/server startup
that wanted to do payments on their server. The startup had also
invented this technology they called SSL that they wanted to use. The
result of that work is now frequently called electronic
commerce. There were some number of requirements regarding how SSL was
deployed and used that were almost immediately violated.

Somewhat as a result of that work, in the mid-90s, we were invited to
participate in the x9a10 financial standard working group that had
been given the requirement to preserve the integrity of the financial
infrastructure for all retail payments. The result was the x9.59
standard for debit, credit, stored-value, electronic ACH, high-value,
low-value, point-of-sale, internet, unattended, face-to-face, contact,
contactless, wireless, transit-turnstile, ... aka ALL. It also
slightly tweaked the paradigm so it eliminated the threats from
eavesdropping, skimming and data breaches (it didn't eliminate
such activity, it just eliminated the threat/risk that crooks could
use the information to perform fraudulent transactions).

About the same time, there were a number of presentations by consumer
dial-up online banking operations about motivation for the move to
internet (eliminate significant customer support costs with
proprietary dial-up infrastructures). At the same time, the
commercial/business dial-up online cash-management/banking operations
were making presentations that they would never move to the internet
(because of the risks and vulnerabilities involved, even with SSL).

for a long time, it has been widely understood that PCs are easily
compromised in a large number of different ways. while the x9.59 standard
addressed the issue of crooks being able to use information to
independently perform fraudulent transactions ... there was also a
class of PC compromises where the user was convinced to
authenticate/authorize a transaction that was different from what they
believed it to be.

somewhat as a result, in the last half of the 90s, there was the EU
FINREAD standard ... which was an independent box attached to the PC
that had its own display and input and would generate an authorized
transaction that would run end-to-end (from FINREAD to the financial
institution). A compromised PC could still do DOS ... but FINREAD
eliminated an additional class of fraud involving compromised PCs (in
theory a "locked down" cellphone/PDA might provide similar
functionality, wirelessly).
http://www.garlic.com/~lynn/subintegrity.html#finread

the problem was that in the early part of this century, there was a
large pilot involving a device that attached to the PC thru the
serial-port and provided authenticated transaction capability. This
quickly ran into enormous consumer support costs from serial-port
conflicts ... resulting in a widely spreading, pervasive opinion in
the financial industry that such attachments weren't practical in the
consumer market. This resulted in nearly all such programs evaporating
(including EU FINREAD).

However, the issue wasn't with the attached technology ... it was
with using the serial-port interface. This was also a major issue with
the consumer dial-up proprietary online banking moving to the internet
(all the consumer support problems with serial-port dial-up
modems). Apparently all the institutional knowledge regarding
serial-port conflicts and enormous consumer support issues managed to
evaporate in a five year period. It also didn't need to have occurred:
by that time there was USB, and a major part of USB was to address all
the serial-port issues and problems ... the pilot might have used USB
attachment rather than serial-port.

oh, recently there have been some recommendations that businesses
that do online internet banking ... get a PC that is dedicated to the
function and *NEVER* used for anything else; this would at least
address some of the issues raised in the mid-90s about why
commercial/business dial-up online banking was never going to move to
the internet (at least that was what they were saying at the time).

--
virtualization experience starting Jan1968, online at home since Mar1970

CPU time variance

eamacneil@YAHOO.CA (Ted MacNEIL) writes:
The simple explanation is, during one of the MVS/SP1.x releases, some
things that were done in disabled mode, and under SRB reported CPU,
were done in disabled mode and TCB which was allocated to the last
active task.

MVS didn't actually directly account for lots of activity ... so the
percent of processor activity captured could possibly be as low as 40%
(of total processor activity). Some of the subsystem intensive
operations ... that attempted to do everything possible to avoid MVS
... could get captured processor activity up to 80% or so (only 20% of
the MVS processor cycles unaccounted for). Operations that billed for
usage would potentially take the overall capture ratio (captured usage
as a percent of overall total processor usage) ... and prorate all
usage by that amount.
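
a minimal sketch of that proration arithmetic (illustrative only; job
names and numbers are made up, this isn't any actual SMF/RMF tooling):

    # prorate billed CPU by the measured capture ratio, spreading the
    # "uncaptured" processor time across jobs in proportion to their
    # captured time
    def prorate_by_capture_ratio(captured_seconds, total_cpu_seconds):
        captured_total = sum(captured_seconds.values())
        capture_ratio = captured_total / total_cpu_seconds   # e.g. 0.40-0.80
        # dividing by the capture ratio makes the prorated figures add
        # back up to the real total processor usage
        return {job: secs / capture_ratio
                for job, secs in captured_seconds.items()}

    jobs = {"PAYROLL": 120.0, "CICSPROD": 300.0, "TSOUSER": 60.0}
    print(prorate_by_capture_ratio(jobs, total_cpu_seconds=800.0))
    # 480s captured out of 800s busy -> 60% capture ratio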

VM accurately accounted for all usage (it didn't have the concept of
things like "uncaptured" processor usage). The variability that VM
operation might have would come from things like concurrent activity
affecting task switching rates ... which affect things like cache hit
ratio.

Processor thruput (number of instructions executed per second) is
sensitive to processor cache hit ratios ... actually to the frequency
that instructions stall waiting for data to be fetched from main
storage/memory (because the data isn't in the cache). With increasing
processor speeds w/o a corresponding increase in storage/memory speeds
... there is an increased mismatch between processor execution speeds
and the slow-down that happens when waiting for data not in the cache.

When there is lots of task switching going on ... much of the data in
the processor cache may have to be changed in the switch from one task
to another ... resulting in very high cache miss rate during the
transition period (significantly slowing down the effective processor
execution rate and increasing the corresponding measured CPU utilization
to perform some operation).
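
a rough back-of-the-envelope illustration of why the cache miss rate
dominates the effective instruction rate (all numbers below are
made-up illustrations, not measurements of any particular machine):

    # effective MIPS: cycles-per-instruction grows with
    # (memory references per instruction) x (miss rate) x (miss penalty)
    def effective_mips(clock_mhz, base_cpi, refs_per_instr,
                       miss_rate, miss_penalty_cycles):
        cpi = base_cpi + refs_per_instr * miss_rate * miss_penalty_cycles
        return clock_mhz / cpi

    # same processor: light task switching (warm cache) vs heavy
    # task switching (cold cache after each switch)
    print(effective_mips(50, 2.0, 1.5, miss_rate=0.02, miss_penalty_cycles=20))
    print(effective_mips(50, 2.0, 1.5, miss_rate=0.10, miss_penalty_cycles=20))
    # ~19.2 MIPS vs ~10 MIPS for the identical instruction stream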

Over the past couple decades ... the instruction stall penalty has
become more and more significant ... resulting in lots of new
technologies attempting to mask the latency in waiting for data to be
fetched from memory/storage (on cache miss) & keep processor doing
something ... aka things like deep pipelining and out-of-order execution
... which may then also result in branch prediction and speculative
execution.

in any case, variability in task switching and other concurrent
activity (that might occur in a vm environment) ... can result in
variability in cache miss rate ... and therefore some variability in
effective instructions executed per second (i.e. variability in
elapsed cpu used to perform some operation).

LPAR activity (effectively a subset of VM function in the hardware)
could also have similar effects on cache miss rates.

--
virtualization experience starting Jan1968, online at home since Mar1970

Predicting that a technology will be supplanted is easy.
Accurately predicting what will replace it and when is hard.

a lot of DBMS are disk-centric, based on the home position for data
being located on disk ... with real memory/storage used to cache
records.

with the increase in real memory sizes, lots of databases can
completely fit in real storage. there has been work on "in memory"
databases ... a big motivation was the telco industry ... being able
to handle call record volumes. there were some benchmarks of these "in
memory" databases (which might use disks for sequential writing
involving logging/recovery) against a standard DBMS that had enough
memory/storage to completely cache the full DBMS ... and the "in
memory" database still got ten times the thruput of the disk-centric
DBMS (even with all data cached).
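
a minimal sketch of the "in memory" idea (illustrative only, not any
particular product): records live in memory, the disk is used only for
sequential log writes for recovery, and the read path never touches
the disk-centric buffer/cache code paths:

    import json

    class InMemoryStore:
        def __init__(self, log_path):
            self.data = {}
            # append-only, line-buffered log used only for recovery
            self.log = open(log_path, "a", buffering=1)

        def put(self, key, value):
            # sequential log write first, then update the in-memory copy
            self.log.write(json.dumps({"k": key, "v": value}) + "\n")
            self.data[key] = value

        def get(self, key):
            return self.data.get(key)   # no disk I/O on the read path

    # store = InMemoryStore("/tmp/calls.log"); store.put("call-123", 7)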

In the middle/late 90s, there was prediction that the telcos would take
over the payment industries ... because the telcos were the only
operations (based on scaleup to handle call-record volumes) that were
capable of handling the predicted volumes in micro-payments. Once firmly
entrenched handling micro-payment volumes ... they would then move
upstream to take over the rest of the payment industry. Well, the
micro-payment volumes never materialized ... and the telco industry has
yet to take over the payment industry.

However since then, Oracle has acquired at least one of the "in-memory"
DBMS implementations ... and some of the payment processors have
deployed backend platforms originally developed to handle the telco call
record volumes.

--
virtualization experience starting Jan1968, online at home since Mar1970

IBM 3883 Manuals

wmhblair@COMCAST.NET (William H. Blair) writes:
I've never seen this documented. But I never looked that deeply,
either, so it might have been in my face since 1981. Regardless,
I was told it was only for purposes of allocating space on the
actual track -- AS IF the device actually wrote 32-byte (the
cell size) physical blocks (or multiples thereof). At the time,
prior to PCs, this meant nothing special to me. Of course fixed
sector sizes for PC drives made more sense, and I assumed the
underlying 3380 and 3375 hardware, like the 3370, used a fixed
block [or sector] size, which obviously had to a multiple of 32.
Later, I was told (by IBM) that this was, in fact, the case.

i remember that the 32byte data error correcting block was also the
physical block on disk ... but I haven't found a documented reference to
that effect.

as I mentioned before ... 3380 was high-end ... while 3370 was
considered mid-range.

there was a big explosion in the mid-range market with 43xx machines
... which MVS didn't fit well into. somewhat as part of helping MVS
sell into that exploding 43xx/midrange market ... there was the 3375
... which emulated CKD on 3370 (there was also a major effort to get
the MVS microcode assist from the 3033, required by all the new
releases of MVS, implemented on 4341). recent post mentioning that the
3375 gave MVS a CKD disk for the mid-range:
http://www.garlic.com/~lynn/2010l.html#13 Old EMAIL Index

at the time, 3310/3370 were sometimes referred to as FBA-512 (rather than
simply FBA) ... implying plans for moving to even larger block
sizes. recent post referencing current FBA is looking at moving from 512
(with 512 byte data block error correcting) to 4096 (with 4096 byte data
block error correcting)
http://www.garlic.com/~lynn/2010m.html#1 History of Hard-coded Offsets

the above has some articles about various FBA-512
compatibility/simulation efforts (for FBA-4096) ... very slightly
analogous to CKD simulation on FBA.

Of course, VM had native FBA support and didn't have problems with
low-end and mid-range markets. Part of this was that both VM kernel
paging ... and the CMS filesystem organization (dating back to the
mid-60s) have been logical FBA ... even when having to deploy on
physical CKD devices.

I've made past references that with the demise of FS there was a mad
rush to get stuff back into the 370 hardware & software pipeline
... and it was going to take possibly 7-8 yrs to get XA (starting from
the time FS was killed). The MVS/XA group managed to make the case
that the VM370 product had to be killed, the development group shut
down and all the people moved to POK in order for MVS/XA to make its
ship schedule (endicott managed to save the vm370 product mission, but
had to reconstitute a development group from scratch). misc. past
posts mentioning the future system effort
http://www.garlic.com/~lynn/submain.html#futuresys

Later in the 80s, POK decided to come out with vm/xa "high-end" product.
However, there was some interesting back&forth about whether VM/XA would
include FBA support ... they were under pressure to state that CKD was
much better than FBA and that was why FBA support wasn't needed (as part
of supporting MVS lack of FBA support).

rfochtman@YNC.NET (Rick Fochtman) writes:
At Clearing, we ran MVS very nicely on three 4341 Model Group 2 boxen
for three years and it ran very nicely. Nowdays, my pocket calculator
probably has more raw compute power but the fact remains that we were
very happy with the equipment, until our workload grew beyond their
capacity to process it. IIRC, the DASD farm was a mix of 3330-11's and
3350's. Talk about ancient.....

4341 was approx. 1+mip processor ... much better price/performance than
the (slower) 3031 ... and a cluster of 4341s was cheaper, had better
price/performance and a higher aggregate mip rate than a 3033 (there
were some internal politics trying to protect high-end sales from being
eaten by mid-range).

bldg. 14&15 were running stand-alone processors for testcell testing.
at one point they tried to run with MVS ... but found MVS had a 15min
MTBF (fail, hard loop, something requiring reboot). I offered to
rewrite IOS to make it bullet proof and never fail ... allowing them
to do on-demand, anytime, multiple concurrent testing (instead of
around the clock, 7x24 scheduled stand-alone test time)
... significantly increasing development productivity. this also got
me dragged into solving various other kinds of problems ... as well as
being dragged into conference calls with channel engineers back east.

so bldg. 14&15 typically got the next processor off the line ... after
the processor engineers (4341s, 3033s, etc). one of the benefits of
having provided the environment for engineering ... I could have much of
the remaining processor time (not required by development testing
... which was very i/o intensive and rarely accounted for more than
couple percent ... even with multiple concurrent testcells).

for something totally unrelated ... there was a recent thread about
(in the mid-90s) having rewritten a major portion of airline
reservation systems so that it ran possibly 100 times faster, sized so
that it could handle every reservation for every airline in the
world. as noted in the thread ... the aggregate MIP rate sizing is
about equivalent to what is available in a cellphone today
http://www.garlic.com/~lynn/2010b.html#80 Happy DEC-10 Day
http://www.garlic.com/~lynn/2010c.html#19 Processes' memory

--
virtualization experience starting Jan1968, online at home since Mar1970

IBM 3883 Manuals

rfochtman@YNC.NET (Rick Fochtman) writes:
At Clearing, we ran MVS very nicely on three 4341 Model Group 2 boxen
for three years and it ran very nicely. Nowdays, my pocket calculator
probably has more raw compute power but the fact remains that we were
very happy with the equipment, until our workload grew beyond their
capacity to process it. IIRC, the DASD farm was a mix of 3330-11's and
3350's. Talk about ancient.....

... group 2 was a faster machine introduced later ... however, if you
were running with an (existing?) DASD farm with a mix of 3330-11s and
3350s ... it was possibly an upgrade of an existing 370 machine
(possibly a single 158 to three 4341s ... or maybe from a single
168?). it might even have been a pre-existing MVS (that didn't require
the new 3033 mvs microcode assist) ... and likely within a traditional
looking datacenter.

... but this wasn't a couple machines ... part of the explosion in
mid-range was customers buying hundreds of 4341s at a time (which
required new processors as well as new mid-range dasd) ... and placing
them all over the business ... converting departmental conference
rooms and supply rooms for 4341s with "mid-range" dasd .... not
exactly someplace to load up a lot of 3380 disk cabinets (designed for
the datacenter).

another trivial example was the internal service processor for 3090
... it was a pair of (vm) 4361s with FBA (3090 service panels were
actually cms ios3270 screens)

at internal installations, that vm/4341 mid-range explosion
contributed to the scarcity of conference rooms ... with places like
the santa teresa lab putting vm/4341s on every floor in every tower.

the 3880 controller group in san jose, ran a huge microcode design
application on collection of 168s in bldg. 26 datacenter. The datacenter
was crammed to the walls and there wasn't any place to add more
machines. They started looking at also putting lots of these vm/4341s
out into all the departmental nooks & crannies ... as a means of
delivering a lot more processing power to their development
organization (again all new installations requiring new mid-range dasd).

one of the things started in the 90s was very aggressive physical
packaging effort to configure ever increasing numbers of processors in
the smallest amount of floor space ... this was somewhat done to get
large numbers of the explosion in departmental and distributed
processors back into the datacenter.

Much of the early work had gone to national labs (like LLNL) and high
energy physics labs (as GRID computing). It has also recently morphed
into "cloud computing" ... somewhat marrying massive cluster scaleup
with the 60&70s mainframe online commercial timesharing ... much of it
virtual machine based (starting with cp67 and then morphing into
vm370) ... one of the largest such (mainframe virtual machine)
operations was the internal (virtual machine based) online HONE system
providing world-wide sales & marketing support ... misc. old HONE
references
http://www.garlic.com/~lynn/subtopic.html#hone

late 70s there was start of effort to move the large variety of internal
microprocessors to 801/risc (iliad chips) ... this included the
follow-ons to 4331/4341 (i.e. 4361/4381), the as/400 follow-on to the
s/38 ... and a lot of other internal microprocessors.

various issues cropped up with iliad chips ... and the effort was
abandoned ... 4361/4381 doing their own custom cisc chip, crash project
to do a cisc chip for as/400 (a decade later, as/400 did move to a variant of
801/risc power/pc chip), etc. in the wake of abandoning that effort,
some number of 801/risc engineers leave and show up on risc efforts at
other vendors.

i contributed some to the whitepaper that killed the effort for 4381.
low/mid range machines were vertical microcode processors simulating 370
... somewhat akin to current day Hercules effort on intel processors.
The idea was to move to common 801/risc for microprocessors
... minimizing the duplication of effort around the corporation
developing new chips and associated (microcode) programming environment.

The whitepaper claims were that cisc technology had gotten to the
stage where much of the 370 instruction set could be implemented
directly in circuits (rather than emulated in microcode). Even with
the higher mip rate of 801/risc, there was still approx. 10:1
instruction emulation overhead (needed a 20mip microprocessor to get a
2mip 370) ... while a cisc chip might only be 3-5 mips, quite a bit of
the 370 instructions could be nearly native in the chip.

small piece from that whitepaper:
- The 4341MG1 is about twice the performance of a 3148. Yet the
4341MG1's cycle time is only about 1.4 times faster than the 3148's.
The rest of the performance improvement comes from applying more
circuits to the design.

- The 4341MG2 is about 1.6 times faster than the 4341MG1. Yet the
4341MG2's cycle time is 1.4 times faster than the 4341MG1's. Once
again, performance can be attained through more circuitry, not just
faster circuitry.

- The 3031 is about 1.2 times faster than the 3158-3. Yet the 3031's
cycle time is the same as the 3158-3's.

the 3031 reference is slight obfuscation. the 158-3 was single
(horizontal microcode processor) engine shared between the 370 microcode
and the integrated channel microcode. the 3031 was 158-3 with two
processor engines ... one dedicated to running 370 microcode (w/o the
integrated channel microcode) and one dedicated to the "303x channel
director" running the integrated channel microcode (w/o the 370
microcode).

recent reference to the 158 engine (with the integrated channel
microcode) being used as the 303x channel director for all 303x
processors (i.e. 3031 was a 158-3 repackaged to use the channel
director, 3032 was a 168-3 repackaged to use the channel director, and
3033 started out as the 168-3 wiring diagram using 20% faster chips
with the channel director)
http://www.garlic.com/~lynn/2010m.html#15 History of Hard-coded Offsets

Part of the 801 story was that the simplification of the 801 hardware
could be compensated for by sophistication in the software: the pl.8
compiler and cp.r monitor. for the fun of it ... recent reference to
old benchmark numbers of pl.8 with a pascal frontend against pascal/vs
(on 3033)
http://www.garlic.com/~lynn/2010m.html#35 RISC design, was What will Micrsoft use its ARM license for?

A New U.S. Treasury Rule Would Add Millions to Prepaid Ranks

is the situation that when you click on the linkedin URL ... it
doesn't bring up the actual article?

Linkedin tries to "frame" posted articles/URLs ... sometimes when
clicking on the linkedin flavor of a posted URL ... you get a webpage
that just has the linkedin header ... w/o the actual article appearing
in the frame below. Sometimes this is a browser configuration issue
... and sometimes it has been a linkedin issue. as posted above, the
original posted "real" URL is (w/o the linkedin "wrapper"):

which has a URL to this FinCen PDF file:
http://www.heritage.org/Research/EnergyandEnvironment/EM74.cfm

There have been a number of somewhat related news stories about money
laundering being traced back to some of the too-big-to-fail financial
institutions (the gov. was following the money trail used to buy some
airplanes used in drug smuggling ... and it led back to some large US
financial institutions). One point of those articles was that the
gov. has been doing so much to try and keep the associated
(too-big-to-fail) institutions afloat ... that rather than
prosecuting, throwing the executives in jail, and shutting down the
institutions ... the institutions have been asked to stop doing it.

--
virtualization experience starting Jan1968, online at home since Mar1970

Announcement from IBMers: 10000 and counting

I had started corporate online telephone book and email address files
in the late 70s ... I thought it was neat when my list of (internal)
email addresses passed 10,000. In the early days ... it was ok to have
both external and internal phone numbers on your business card; then
some of us started adding our internal email address to our business
cards. The corporation then came out with guideline that business
cards were only for customers and internal contact information wasn't
allowed on business cards. However, by that time, we had gateway to
the arpanet/internet and a few of us could put our external email
address on our business cards.

One of the frequent discussions from 1980 was about how the majority
of the corporation weren't computer users ... especially the
executives. It was believed that it might vastly improve decisions if
large percent of employees actually used computers ... and what kind
of inducement could we come up with to attract corporate employees to
using computers.

HONE had addressed some of that after the 23jun69 unbundling
announcement to provide world-wide online access to sales & marketing
(after US HONE datacenters were consolidated in northern cal. ... the
number of US HONE ids passed 30,000 in the late 70s). Part of this was
increasing use of HONE AIDS in sales&marketing support ... including
not being able to even submit mainframe orders that hadn't been
processed by various HONE applications.

Has there been a change in US banking regulations recently?

On 08/13/2010 02:12 PM, Jon Callas wrote:
Possibly it's related to PCI DSS and other work that BITS has been
doing. Also, if one major player cleans up their act and sings about
how cool they are, then that can cause the ice to break.

Another possibility is that a number of people in financials have
been able to get security funding despite the banking disasters
because the risk managers know that the last thing they need is a
security brouhaha while they are partially owned by government and
thus voters.

I bet on synergies between both.

If I were a CSO at a bank, I might encourage a colleague to make a
presentation about how their security cleanups position them to get
an advantage at getting out from under the thumb of the feds over
their competitors. Then I would make sure the finance guys got a
leaked copy.

Jon

the original requirement for SSL deployment was that it was on from
the original URL entered by the user. The drop-back to using SSL for
only a small subset ... was based on the computational load caused by
SSL cryptography .... in the online merchant scenario, it cut thruput
by 90-95%; the alternative, handling the online merchant scenario with
SSL for the total user interaction, would have required increasing the
number of servers by a factor of 10-20.
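
the arithmetic behind that 10-20 figure is straightforward (simple
illustration):

    # if SSL cryptography cuts a server's thruput by 90-95%, covering
    # the same load with SSL for the whole interaction needs roughly
    # 1/(remaining fraction) times as many servers
    for cut in (0.90, 0.95):
        remaining = 1.0 - cut
        print(f"thruput cut {cut:.0%} -> ~{1/remaining:.0f}x the servers")
    # thruput cut 90% -> ~10x the servers
    # thruput cut 95% -> ~20x the servers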

One possibility is that the institution has increased the server
capacity ... and/or added specific hardware to handle the
cryptographic load.

A lot of banking websites are not RYO (roll-your-own), internally
developed ... but stuff they buy from vendor and/or have the website
wholly outsourced.

Also some number of large institutions have their websites outsourced
to vendors with large replicated sites at multiple places in the world
... and users interaction gets redirected to the closest server
farm. I've noticed this periodically when the server farm domain name
and/or server farm SSL certificate bleeds thru ... because of some
sort of configuration and/or operational problems (rather than seeing
the institution SSL certificate that I thot I was talking to).

Another possibility is that the vendor product that they may be using
for the website and/or the outsourcer that is being used ... has
somehow been upgraded (software &/or hardware).

--
virtualization experience starting Jan1968, online at home since Mar1970

... original design/implementation. The very first commerce server
implementation by the small client/server startup (that had also
invented "SSL") ... was a mall paradigm, with development underwritten
by a large telco (they were looking at being a major outsourcer of
electronic commerce servers) ... then the individual store
implementation was developed.

we had previously worked with two people responsible for commerce
server (at small client/server startup) on ha/cmp ... they are
mentioned in this old posting about jan92 meeting in ellison's
conference room
http://www.garlic.com/~lynn/95.html#13

they then left to join the small client/server startup ... and we also leave
what we had been doing. we then get brought in as consultants because they
want to do payment transactions on their server ... wanting to use this
technology called "SSL" that had been invented at the startup. We have to
go thru the steps of mapping the technology to payment business
processes ... including backend use involving the interaction between commerce
servers and the payment gateway; the payment gateway sitting on
the internet and interfacing to acquiring network backends ... misc. past
posts mentioning payment gateway
http://www.garlic.com/~lynn/subnetwork.html#gateway

approx. in the same era, but not exactly the same time (when
webservers were seeing the ssl cryptographic load & dropping back to
only using it for payment) ... some of the larger websites were
starting to see a "plain" tcp/ip scaleup issue ... having to do with
tcp being originally designed as a session protocol ... which was
effectively being misused by HTTP. As a result most vendor
implementations hadn't optimized session termination ... which was
viewed as an infrequent event (up until HTTP). There was a six month
period or so ... where the large websites saw their processors
spending 90-95% of the cpu running the FINWAIT list (as part of
session termination).
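
for a modern-day flavor of the same thing, a quick way to watch for a
pile-up of connections stuck in termination states (a sketch only;
assumes the python "psutil" package is installed, and may need
elevated privileges to see all connections):

    from collections import Counter
    import psutil

    states = Counter(c.status for c in psutil.net_connections(kind="tcp"))
    for s in ("FIN_WAIT1", "FIN_WAIT2", "TIME_WAIT"):
        print(s, states.get(s, 0))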

The small client/server startup was also seeing (other) scaleup
problems in their server platforms used for downloading products
(especially browser product download activity) ... and in constant
cycle of adding servers. This was before rotating front-ends ... so
users were asked to manually specify URL of specific server.

Their problem somewhat cleared up when they installed a large sequent
box ... both because of the raw power of the sequent server ... and
also because sequent claimed to have addressed the session termination
efficiency sometime previously (related to commercial unix accounts
with 20,000 concurrent telnet sessions).

For other topic drift ... I believe the first rotating,
load-balancing front-ends were custom modified software for routers at
google.

--
virtualization experience starting Jan1968, online at home since Mar1970

... also mentions mainframe finally getting ESCON at 10MB/s (about the
time we were doing FCS for 1Gb/s dual-simplex, aka concurrent 1Gb/s in
each direction ... mainframe flavor of FCS with bunch of stuff layered
ontop, was eventually announced as FICON). misc. old email from
fall of 91 & early 92 related to using FCS for cluster scaleup
http://www.garlic.com/~lynn/lhwemail.html#medusa

was the 100MB/s or so transfer (16 tracks in parallel) ... 3090 had to
do some unnatural acts to connect 100MB/s HiPPI channel interface
(lots of problems between MVS unable to support non-CKD ... and
mainframe difficulty supporting 100MB/s and higher transfer rates).

the article also mentions use of EVE ... either the engineering
verification engine or the endicott verification engine ... depending on
who you were talking to. EVE packaging violated standard product floor
loading and weight guidelines ... for customer products (but they
weren't sold to customers). San Jose got an EVE in the period of the
earthquake retrofit of disk engineering bldg. 14 ... while engineering
was temporarily housed in an offsite bldg.

The san jose EVE (in offsite bldg) as well as los gatos LSM was used in
RIOS chipset design (part of the credit bringing in RIOS chipset a year
early went to use of EVE and LSM).

There was HSDT 7m satellite dish in austin (where RIOS chip design went
on, austin had greater rainfade and angle thru atmosphere to the bird in
the sky ... eventually announced as power & rs6000) ... and HSDT 4.5m
dish in los gatos lab parking lot. That got chip designs between austin
and LSM in the los gatos lab. Los Gatos lab had T3 microwave digital
radio to roof of bldg. 12 on main plant site ... and then link from
bldg. 12 to temporary offsite engineering lab (got rios chip design from
austin to the san jose EVE).

--
virtualization experience starting Jan1968, online at home since Mar1970

in '95 there were a number of presentations by consumer dial-up online
banking operations (which had been around for nearly a decade) about
moving to the internet ... biggest reason was being able to offload
significant costs of the proprietary dial-up infrastructures to ISPs
(who could amortize it across a number of different application
environments/purposes). at the same time, the commercial/business
dial-up online banking operations were making presentations that they
would never move to the internet because of the significant security
and fraud problems (that continue on today).

somewhat analogous to the proprietary dial-up online banking ... there
were a number of proprietary VANs (value added networks) that grew up
during the 70s & 80s ... all of which were made obsolete by the
pervasive growth of the internet (although a few are still trying to
hang on).

part of the uptake of the internet in the mid-90s was the appearance
of browser technology, which greatly simplified moving existing online
applications to browser/internet based operation; i.e. the internet
had been around since the great cut-over from arpanet to internet on
1jan83 ... although with some number of restrictions on commercial use
... until the spread of CIX in the early 90s.

--
virtualization experience starting Jan1968, online at home since Mar1970

in the 90s, ibm got a 256-processor unix server when it bought
sequent, which had numa-q ... a 256 processor unix server using SCI
and (single core/chip) intel processors. the equivalent at the time to
multicore chips was multiple single-core chips on a board.

one of the reasons that SIE instruction was so slow on the 3081 ... was
that the service processor had to page some of the microcode in from
3310 FBA disk. things got faster on 3090 with SIE microcode resident
... and a lot more virtualization hardware support ... eventually
expanded to PR/SM (on 3090 ... compete with amdahl hypervisor; basis for
current LPAR ... effectively subset of virtual machine function).

presumably part of the implementation trade-off for the 3081 was that
vmtool (internal high-end virtual machine & used sie) was purely planned
on being used for mvx/xa development (and never planned on shipping as
product). in that environment there was less of concern of production
use of vmtool/sie and therefor less of performance issue (at least
involving doing lots of switching between virtual machines and therefor
SIE execution happened relatively infrequently).

--
virtualization experience starting Jan1968, online at home since Mar1970

On 08/17/2010 06:16 PM, David G. Koontz wrote:
Privacy against whom? There were enough details revealed about the key
escrow LEAF in Clipper to see that the operation derived from over the air
transfer of keys in Type I applications. The purpose was to keep a back
door private for use of the government. The escrow mechanism an involution
of PKI.

There were of course concerns as evinced in the hearing under the 105th
Congress on 'Privacy in the Digital Age: Encryption and Mandatory Access
Hearings', before the Subcommittee on the Constitution, Federalism, and
Property Rights, of the Committee on The Judiciary, United States Senate in
March 1998. These concerns were on the rights of privacy for users.

Clipper failed primarily because there wasn't enough trust that the privacy
wouldn't be confined to escrow agents authorized by the Judiciary. The
Federal government lost credibility through orchestrated actions by those
with conscience concerned over personal privacy and potential government abuse.

Privacy suffers from lack of legislation and is only taken serious when the
threat is pervasive and the voters are up in arms.

some of the organizations were heavily involved in privacy and had
done in-depth consumer surveys ... and the number one privacy issue
was "identity theft", particularly the kind called "account fraud"
... particularly as a result of data breaches. there seemed to be
little or nothing being done about data breaches, i.e. the fraud
doesn't occur against the parties responsible for the repositories
... but against the consumers whose information is in the repositories
(aka it is much easier to get parties to do something about security
when they are both responsible and at risk). In any case, it
apparently was hoped that the data breach notifications might motivate
the parties holding the data to improve the security around the
repositories (this could be considered a case of the "mis-aligned
business process" brought up in the financial mess congressional
hearings ... i.e. regulation is much harder when the parties aren't
otherwise motivated to do the right thing).

Since the cal. data breach legislation, there have been a number of
federal bills introduced on the subject. one group has tended to be
similar to the cal. legislation ... but frequently there have been
competing "data breach notification" bills introduced at the same time
that pre-empt the cal. legislation and eliminate most notification
requirements.

The organizations responsible for the cal. data breach notification
legislation were also working on a personal information opt-in bill at
about the same time ... when GLBA (the legislation that also repealed
Glass-Steagall from the 30s and was one of the factors in the current
mess) added federal pre-emption opt-out ... allowing institutions to
share personal information unless there was an opt-out request
(pre-empting the cal. work on requiring that personal information
could only be shared when there was the individual's opt-in).

A few years ago at the annual privacy conference in DC, there was a
session with panel of FTC commissioners. Somebody in the back of the
room got up and claimed to be involved in many of the call-centers for
the financial industry. The person claimed that the opt-out
call-lines at these centers had no provisions for recording any
information about individuals calling in (requesting opt-out of
personal information sharing). The person then asked if any of the
FTC commissioners might consider looking into the problem (later the
whole conference adjourned to the spy museum down the street which had
been taken over to celebrate the retirement of one of the FTC
commissioners).

Note there was a key escrow business organization (that was
supposedly business sponsored). The scenario in the meetings was that
there was a disaster/recovery, no-single-point-of-failure, requirement
for escrowing keys used to encrypt critical business data.

On the business side ... this would only require escrow of keys used
for encrypting business critical data. The gov's participation was
that they could have court orders to gain access to the keys.
However, there seemed to have been some implicit gov. assumption that
the business key escrow facilities would actually be escrowing all
encryption keys ... not just the keys used for encrypting business
critical data (needed for disaster/recovery &
no-single-point-of-failure scenarios).

More recently I was co-author of the financial x9.99 privacy
standard. Part of this included looking at how HIPAA/medical
information might be able to leak in financial statements. However one
of the things I grappled with was getting across the concept that
protecting individual personal information could require the
institution's security dept. to protect the personal information from
the activities of the institution (security scenarios more frequently
have institution assets being protected from criminals ... as opposed
to individual assets being protected from the institution).

the biggest issue in all the privacy surveys and data breach
notification stuff was criminals being able to use the leaked
information for performing fraudulent financial transactions against
the consumer's account (sort of a class of replay-attack; the number 2
issue was institutions sharing personal information). One of the
things that we had done earlier in the x9.59 financial standard was to
slightly tweak the paradigm and eliminate the replay-attack
vulnerability. X9.59 did nothing to "hide" the information ... it
just eliminated crooks being able to use the information (leaked from
eavesdropping, skimming, data breaches, etc) for fraudulent
transactions (x9.59 eliminating the mis-aligned business process where
institutions needed to hide consumer information ... where the
information leaking represented no <direct> risk to the
institution).
http://www.garlic.com/~lynn/x959.html#x959

--
virtualization experience starting Jan1968, online at home since Mar1970

Has there been a change in US banking regulations recently

On 08/17/2010 09:52 PM, James A. Donald wrote:
For sufficiently strong security, ECC beats factoring,
but how strong is sufficiently strong? Do you have any data?
At what point is ECC faster for the same security?

in the 90s, one of the scenarios for RSA & hardware tokens was that
the tokens had extremely poor random numbers. the ec/dsa standard
required a random number as part of the signature, while the RSA
alternative had mechanisms not needing random number capability in the
token. That was possibly one argument for RSA: because it could work
w/o a random number, it could work both with tokens and non-tokens.

However, RSA was extremely compute intensive in tokens ... even with
contact token drawing enormous power ... it still took a long time.
One avenue was adding enormous number of circuits to the chip to do
RSA computation in parallel. However, that significantly drove up the
power requirement ... further limiting implementations to contact
based operations (to obtain sufficient power to perform the
operations).

Somewhat as a result of having working on what is now frequently
called "electronic commerce", in the mid-90s, we were asked to
participate in x9a10 financial standard working group ... which had
been given the requirement to preserve the integrity of the financial
infrastructure for ALL retail payments (debit, credit, stored value,
high-value, low-value, contact, contactless, internet, point-of-sale,
attended, unattended, transit turnstile, aka ALL)

As part of looking at ALL, was looking at the whole hardware token
infrastructure (that could also meet ALL requirement). Meetings with
the transit industry had requirement that transaction had to be
contactless (i.e. some form of iso14443), work within couple inches of
reader, and perform the operation within 1/10th of second.

Much of the x9.59 digitally signed transaction was light-weight enuf
to (potentially) work within the constraints of the transit turnstile
requirement (as well as meet all the x9a10 requirements) given a
digital signature technology that had a sufficiently high level of
integrity (& security strength for high-value transactions) and a
token implementation that could also meet the transit-turnstile
requirement.
http://www.garlic.com/~lynn/x959.html#x959

RSA token solutions had tried to shorten the number of seconds (but
never getting to subsecond) by adding circuits & power requirements
(precluding transit turnstile) ... pretty much further locking it into
contact mode of operation.

So we sent out all the requirements ... and got back some number of
responses about how to meet them. Basically (in the late 90s) there
were tokens that could do ec/dsa within the power requirements of
iso14443 and the transit-turnstile elapsed time ... given that their
random number capability could be beefed up to not put things at
risk. This was using a relatively off the shelf chip (several hundred
thousand circuits) with only a minor tweak here and there (not the
massive additional circuits that some vendors were adding to token
chips attempting to reduce RSA elapsed time ... while drastically
driving up the power requirement). We had several wafers of these
chips produced (at a security fab) and used several for EAL
certification.

One of the issues in EAL certification was that since the crypto was
part of the chip ... it was included in the certification (there are
some chips out there with higher certification because they get it on
the bare-bones silicon ... and then add things afterwards, like
applications & crypto, that aren't part of the certification). To get
higher than an EAL4+ certification required a higher level of
evaluation of the ec/dsa implementation ... but during the process,
NIST had published and then withdrawn the ec/dsa evaluation reference.

Also, part of the semi-custom chip was adding some minor tweaks to the
silicon which eliminated some number of process steps in the fab
... as well as post-fab token processing ... significantly reducing
the overall costs for basic product delivery.
http://www.garlic.com/~lynn/x959.html#aads

There was also a preliminary design for a fully custom chip doing all
the operations ... the first cut was approx. 40,000 circuits (as
opposed to several hundred thousand circuits for the relatively
off-the-shelf, semi-custom chip) ... full ec/dsa being able to handle
the x9.59 transaction within the transit turnstile power & elapsed
time constraints (power draw per unit time approx. proportional to
total number of circuits). The fully custom design further reduced the
power draw w/o sacrificing ec/dsa elapsed time.

In the high-value case ... the token had to be relatively
high-integrity. In the transit-turnstile requirements it had to be
able to do the operation within the very low power constraints of
contactless turnstile operation as well as being in very small
subsecond elapsed time range (the very low power constraints
eliminated the RSA-based approaches that significantly increased the
number of circuits powered in parallel as solution to modestly
reducing the elapsed time) ... aka there was both a very tight power
efficiency requirement as well as a very tight elapsed time
requirement for performing the digital signature operation. EC/DSA
could be done within the power efficiency constraints, elapsed time
constraints, security integrity constraints, but had a requirement for
very high integrity random number generation.
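
an illustrative comparison of signing cost at roughly comparable
security levels (a sketch, assuming the python "cryptography" package;
numbers will vary by machine, and a general-purpose CPU is obviously
not an iso14443 token -- but the direction matches the power/elapsed
time argument above):

    import time
    from cryptography.hazmat.primitives import hashes
    from cryptography.hazmat.primitives.asymmetric import ec, rsa, padding

    msg = b"x9.59-style transaction data"
    ec_key = ec.generate_private_key(ec.SECP256R1())
    rsa_key = rsa.generate_private_key(public_exponent=65537, key_size=3072)

    def time_signing(label, sign, n=100):
        start = time.perf_counter()
        for _ in range(n):
            sign()
        print(label, (time.perf_counter() - start) / n * 1000, "ms/signature")

    time_signing("ECDSA P-256", lambda: ec_key.sign(msg, ec.ECDSA(hashes.SHA256())))
    time_signing("RSA-3072   ", lambda: rsa_key.sign(msg, padding.PKCS1v15(), hashes.SHA256()))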

--
virtualization experience starting Jan1968, online at home since Mar1970

memes in infosec IV - turn off HTTP, a small step towards "only one mode"

an original HTTPS deployment requirement was that the end-user
understand the relationship between the webserver they thot they were
talking to and the corresponding (HTTPS) URL that they
supplied. Another requirement was that ALL Certification Authorities
selling SSL/HTTPS domain name digital certificates operate at the same
(very high) level of integrity.

almost immediately, the original requirement was negated by merchant
servers that dropped back to using HTTP for most of the online
experience (because HTTPS cut thruput by 90-95%), restricting HTTPS
use to the pay/checkout portion, accessed by a "pay/checkout" button
(supplied by the unvalidated website). Clicking on HTTPS URL
buttons/fields from unvalidated sources invalidates the original
requirement, since it creates a disconnect between the webserver the
user thinks they are talking to and the corresponding URL (that is
personally known to them). Since then, the use of "click-on" URLs has
proliferated widely, resulting in users having little awareness of the
corresponding URL. The issue/short-coming is that browser HTTPS only
validates the equivalence between the webserver being talked to and
the supplied URL (it does nothing to establish any equivalence between
the supplied URL and what the end-user believes that URL may
represent).
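
a minimal sketch of what that browser-side check actually establishes
(python standard library only; example.com is just a placeholder): the
certificate presented by the server is validated against the host in
the supplied URL, and nothing more -- nothing ties that URL to
whatever the end-user believes they clicked on:

    import socket, ssl
    from urllib.parse import urlparse

    def https_host_check(url):
        host = urlparse(url).hostname
        ctx = ssl.create_default_context()   # verifies chain and hostname
        with socket.create_connection((host, 443), timeout=10) as sock:
            with ctx.wrap_socket(sock, server_hostname=host) as tls:
                # raises ssl.SSLCertVerificationError on any mismatch
                return tls.getpeercert()["subject"]

    # https_host_check("https://example.com/")  # succeeds for any
    # validly-certified host, no matter how the URL reached the user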

In this environment, nearly anybody can buy an SSL domain name
digital certificate for a "front" website, induce the end-user to
"click-on" a field that claims to be some other website (which
supplies their HTTPS URL to the end-user browser), and perform a
MITM-attack with a modified PROXY server that establishes a (separate)
HTTPS connection to the claimed website. There are then a pair of
HTTPS sessions with the fraudulent website in the middle (MITM-attack)
and the modified PROXY providing the interchange between the two
sessions (transparent to most end-users).

With the proliferation of zombie machines and botnets, there could be
a sequence of paired sessions, so the valid website isn't seeing a
large number of different sessions originating from the same IP
address.

Of course, with the progress of zombie machine compromises, the attack
can also be performed with a compromise of the end-user's browser
(eliminating any requirement for intermediate PROXY server). The
latest sophistication of such compromises target very specific online
banking services ... performing fraudulent transactions and modifying
the results presented to the user (so that the fraudulent transactions
aren't displayed).

The compromise of the end-user machine was a well recognized and
researched threat in the 90s and contributed to the EU FINREAD
standard in the later 90s. The EU FINREAD standard basically added a
separate hardened box as the end-point for financial operations, with
its own display and input. Each financial operation was (accurately)
displayed on the box and required explicit human interaction
(eliminating transactions w/o the user's knowledge or transactions
different than what the end-user believed).

The EU FINREAD standard ran afoul of some deployments of other add-on
"secure" boxes which were done with serial-port interfaces. The
difficult consumer problems with add-on serial-port attachments had
been well known since the days of dial-up online banking (before the
migration to the internet). The enormous consumer problems with the
serial-port attachments led to a quickly spreading opinion in the
industry that all add-on secure boxes were impractical for consumers
(when it actually was that add-on serial-port boxes were impractical
for the general market ... which was also a major motivation behind
the creation of USB).

There is little evidence that internet in-transit eavesdropping has
been any kind of significant threat (HTTPS encrypting information as a
countermeasure to eavesdropping on information being transmitted). The
major exploits have been end-point attacks of one sort or another.

--
virtualization experience starting Jan1968, online at home since Mar1970

there was story they told when I was at boeing (I was brought in to
help setup BCS ... being among the first dozen bcs employees)
... about when the first 360s were announced ... boeing had studied
the announcement and walked in to the salesman and placed a really big
order ... the salesman hardly knew what it was they were talking
about. in any case, the salesman's commission that year was bigger than
the CEO's compensation. it is then that the company changed from
commission plan to quota ... something like 20% base salary and 80% of
the salary dependent on meeting quota (reaching 150% of quota then is
".2 + 1.5*.80 = 1.4*base-salary").

next year, boeing ordered so much additional, that the salesman had
reached his quota by end of january (he then supposedly left and started
his own computer services company). quota plan was then enhanced so it
could be adjusted during the year (if the salesman was exceeding it by
any significant margin).

it makes sure that the only windfalls are for the ceo. much later I
sponsored col. boyd's (OODA-loop, patterns of conflict, organic design
for command and control, etc) briefings at IBM. One of Boyd's
biographies mentions he did a tour in charge of spook base (about the
same time I was at boeing), and spook base (largest computer complex in
SE asia) was a $2.5B windfall for IBM (at $2.5B, presumably also bigger
than the boeing renton datacenter). misc. posts mentioning boyd and/or
URLs mentioning boyd
http://www.garlic.com/~lynn/subboyd.html

towards https everywhere and strict transport security

On 08/22/2010 06:56 AM, Jakob Schlyter wrote:
There are a lot of work going on in this area, including how to use
secure DNS to associate the key that appears in a TLS server's
certificate with the the intended domain name [1]. Adding HSTS to
this mix does make sense and is something that is discussed, e.g. on
the keyassure mailing list [2].

There is a large vested interest in the Certification Authority
industry selling SSL domain name certificates. A secure DNS scenario
is having a public key registered at the time the domain name is
registered ... and then a different kind of TLS ... where the public
key can be returned piggy-backed on the "domain name to ip-address
mapping" response.
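
DANE/TLSA (RFC 6698) is the later, standardized form of roughly this
idea -- certificate/key material published in (ideally DNSSEC-signed)
DNS so it arrives with the name lookup. a small sketch, assuming the
python "dnspython" (2.x) package; example.com is just a placeholder:

    import dns.resolver

    def tlsa_records(host, port=443):
        name = f"_{port}._tcp.{host}"
        try:
            return [r.to_text() for r in dns.resolver.resolve(name, "TLSA")]
        except (dns.resolver.NXDOMAIN, dns.resolver.NoAnswer):
            return []   # no key material published in DNS for this service

    print(tlsa_records("example.com"))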

This doesn't have the revenue infrastructure add-on that happened
with the Certification Authorities ... it just gets bundled as part of
the existing DNS infrastructure. I've pontificated for years that this
is a catch-22 for the Certification Authority industry, since the
Certification Authority industry is itself dependent on improvements
in the integrity of the DNS infrastructure ... aka the Certification
Authority industry has to match the information from the SSL digital
certificate applicant with the true owner of the domain name on file
with the DNS infrastructure (among other things, requiring digitally
signed communication that is authenticated with the on-file public key
in the domain name infrastructure is a countermeasure to domain name
hijacking ... which then cascades down the trust chain, to hijackers
applying for valid SSL domain name certificates).
http://www.garlic.com/~lynn/subpubkey.html#catch22

At 50k foot level, SSL domain name certificates were countermeasures
to various perceived shortcomings in DNS integrity ... nearly any kind
of improvements in DNS integrity contributes to reducing the
motivation for SSL domain name certificates. Significantly improving
integrity of DNS would eliminate all motivation for SSL domain name
certificates. This would then adversely affect the revenue flow for
the Certification Authority industry.

I've also periodically claimed that OCSP appeared to be a (very
rube-goldberg) response to my position that digital certificates
(appended to every payment transaction) would actually set the
state-of-the-art back 30-40 yrs (as opposed to their claims that
appended digital certificates would bring payments into the modern era
... that was separate from the issue of the redundant and superfluous
digital certificates representing a factor of 100 times payment
transaction payload and processing bloat).
http://www.garlic.com/~lynn/subpubkey.html#bloat

Anything that appears to eliminate competition for paid-for SSL
digital certificates and/or strengthen the position of Certification
Authorities ... might be construed as having an industry profit
motivation.
http://www.garlic.com/~lynn/subpubkey.html#sslcerts

--
virtualization experience starting Jan1968, online at home since Mar1970

William Hamblen <william.hamblen@earthlink.net> writes:
It is a little complicated. Each state has an excise tax on fuel and
a registration tax on vehicles. There also is a federal excise tax.
Heavy vehicles have to be registered in the states they operate in.
The excise taxes and registration fees are apportioned among the states.
Road construction and maintenance costs are paid by the fuel and
registration taxes. The political bargain behind the excise tax is that
wear and tear on roads tends to be proportional to the amount of fuel
burned and the excise tax is a relatively fair way to share the burden.

we've had some past threads about there being negligible wear&tear on
roads from anything less than 18-wheelers (as well as roads being
designed based on the expected heavy-truck axle-load traffic over the
expected lifetime). a straight fuel-based tax is effectively an
enormous subsidy to the heavy trucking industry (since nearly all road
construction and maintenance costs are driven by heavy truck usage
... with other traffic having negligible effect ... but the road costs
are spread across all traffic).

some conjecture: if costs were accurately apportioned ... the fuel
tax for road build/maint would be totally eliminated for all but
18-wheeler heavy trucks ... and only charged on heavy truck fuel usage
(to recover the equivalent total revenue would likely drive the heavy
truck fuel consumption tax to several tens of dollars per gal.)

old thread with reference to cal state road design based on heavy
truck equivalent single axle loads (ESALs) ... The effects on pavement
life of passenger cars, pickups, and two-axle trucks are considered to
be negligible (url went 404 but lives on at the wayback machine):
http://www.garlic.com/~lynn/2002j.html#41 Transportation
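
a back-of-the-envelope version of the ESAL point, using the standard
"fourth power" rule of thumb (pavement damage scales roughly with the
fourth power of axle load; the loads below are illustrative round
numbers, not from the cited study):

    def relative_damage(axle_lbs, reference_lbs=18000):  # 18,000 lb ~ 1 ESAL
        return (axle_lbs / reference_lbs) ** 4

    print("car axle (2,000 lb):          ", relative_damage(2000))   # ~0.00015
    print("loaded truck axle (18,000 lb):", relative_damage(18000))  # 1.0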

from above:
In the aftermath, shareholders at several companies have demanded and
been given a say on executive pay in hopes of preventing such
excess. HP, Apple, Microsoft, Cisco and Intel are just a handful of
them.

... snip ...

there was report that during the past decade (and financial mess
period), the ratio of executive-to-worker compensation had exploded to
400:1 (after having been 20:1 for a long time and 10:1 for most of the
rest of the world).

part of this was public companies filing fraudulent financial reports to
boost executive pay (even after sarbanes-oxley). GAO possibly thot that
SEC wasn't doing anything about fraudulent financial reporting ...
GAO started publishing reports about financial reporting that it
believed to be fraudulent and/or be in error (in some cases, the filings
were later corrected ... but executive bonuses weren't correspondingly
adjusted).

There was apparently a long list of things that SEC wasn't doing
anything about ... which was a repeated theme by the person testifying
that they had tried for a decade to get SEC to do something about
Madoff. There was also a claim that large numbers regularly practiced
illegal, naked, short selling and believed there was little or no
chance of being caught, since nobody at SEC understood what was going
on.

32nd AADS Patent, 24Aug2010

The original patent work had nearly 50 done and the patent attorneys
said that it would be over 100 before everything was finished. Then
somebody looked at the cost for filing 100+ patents worldwide and
directed that all the claims be packaged into 9 patents for
filing. Later the patent office came back and said they were getting
tired of enormous patents where the filing fee didn't even cover the
cost to read all the claims (and the claims needed repackaging into at
least 25 patents).

we were then made an offer to move on ... so subsequent patent
activity has been occurring w/o our involvement.

--
virtualization experience starting Jan1968, online at home since Mar1970

Breaches have a much higher fraud ROI for the crooks ... number of
accounts compromised per amount of effort. We were tangentially
involved in the cal. data breach legislation; we had been brought in
to help wordsmith the cal. electronic signature legislation and
several of the participants were heavily into privacy issues
... having done detailed, in-depth consumer privacy surveys.
http://www.garlic.com/~lynn/subpubkey.html#signature

The number one issue was the account fraud form of identity theft,
and a major portion came from data breaches. It seemed that little or
nothing was being done about such breaches ... so the result was the
cal. data breach notification (hoping that the publicity might prompt
corrective action and countermeasures). some issues associated with
breaches:

1) aligned business processes ... merchants and transaction
processors have to protect consumer information. in most security
scenarios, there is significantly larger motivation to secure assets
when exploits result in fraud against the institution (trying to
protect the assets). when the resulting fraud from exploits is against
others (consumers), there is much lower motivation to secure such
assets (before breach notification, there was no direct loss to the
merchants and transaction processors).

2) security proportional to risk ... the value of the transaction
information to the merchant is the profit from the transaction
... possibly a couple dollars (and possibly a few cents per account
transaction to the transaction processor). the value of the same
information to the attackers/crooks is the account balance or credit
limit ... potentially greater than 100 times more valuable. As a
result, the attackers may be able to outspend the defenders by a
factor of 100.

3) dual-use vulnerability ... the account transaction information is
needed in dozens of business processes at millions of locations around
the world (requiring it to be generally available). at the same time,
crooks can use the information for fraudulent transactions
... implying that the information in transactions has to be kept
confidential and never divulged (not even to merchants where the
information is required for normal business processes). This results
in diametrically opposing requirements for the same information.

In general, the current paradigm has several "mis-aligned" (security)
business processes involving the information used in transactions. in
the mid-90s, we were asked to participate in the x9a10 financial
standard working group, which had been given the requirement to
preserve the integrity of the financial infrastructure for ALL retail
payments. The resulting x9.59 financial transaction standard slightly
tweaked the paradigm, eliminating all the above threats and
vulnerabilities (including waitresses copying the information).
http://www.garlic.com/~lynn/x959.html#x959

--
virtualization experience starting Jan1968, online at home since Mar1970

Worried About ID Theft? Join the Club; Two-thirds of Americans fret
about online security but it doesn't stop them from catching viruses
http://www.networkworld.com/news/2010/093010-feds-hit-zeus-group-but.html

x9.59 financial transaction standard introduced unique information for
each transaction (that couldn't be reused by crooks for performing
fraudulent financial transactions) which could be packaged in the form
of a hardware token (a unique, physical "something you have" form of
authentication).
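
Loosely, the idea can be sketched as a digital signature over the
transaction details plus a per-transaction counter; the field names,
token interface and use of Ed25519 below are my own illustrative
assumptions, not the x9.59 message format:

# rough sketch (not x9.59 itself): the token signs the transaction details
# plus a per-transaction counter, so captured authentication data can't be
# replayed for a different (or repeated) transaction
import json
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

token_key = Ed25519PrivateKey.generate()      # private key never leaves the token
bank_copy_of_pubkey = token_key.public_key()  # registered with the account

def authorize(account, amount, merchant, counter):
    detail = json.dumps({"acct": account, "amt": amount,
                         "merchant": merchant, "ctr": counter},
                        sort_keys=True).encode()
    return detail, token_key.sign(detail)

detail, sig = authorize("12345678", "19.95", "example-merchant", counter=42)
bank_copy_of_pubkey.verify(sig, detail)   # raises InvalidSignature if anything changed
# the relying party also rejects any counter value it has already seen (replay)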

About the same time as the x9.59 (and x9a10 financial work) in the
mid-90s, there were detailed looks at PC compromises ... viruses &/or
trojans infecting a machine that could impersonate a real live human
(including keystrokes and/or mouse actions). This was somewhat the
scenario cited in the mid-90s by the commercial/business dialup online
banking operations claiming that they would NEVER move to the
internet. The most recent genre of such compromises has detailed,
sophisticated knowledge of online banking operations, can
perform fraudulent financial transactions (impersonating the real live
human) and modify the display of online banking transactions to hide
evidence that the fraudulent transactions have occurred.

The EU FINREAD standard was developed in the 90s as a countermeasure to
compromised PCs (virus/trojan that could impersonate the owner's
physical operations) ... basically a hardened external box that
required real human physical interaction for each transaction
(eliminating the ability for virus/trojan impersonations) and had an
independent display (eliminating a virus/trojan displaying a transaction
different from the one to be executed).
http://www.garlic.com/~lynn/subintegrity.html#finread

--
virtualization experience starting Jan1968, online at home since Mar1970

In 1968 my PPOE decided to stick its academic toe into the world of remote
terminals connected to a mainframe. Four 2740 (not 2741) terminals were
delivered as were the Bell 103A2 modems, but for some reason the IBM 2701 TP
controller was delayed (this was long before I was involved in planning so I
don't know the reason for the delay.)

I didn't get a 2741 at home until mar70 ... before that the only access I
had was a terminal in the office.

at the univ, started out with 2741s ... they had been delivered as part of
360/67 planning on use with tss/360. that never really materialized
because of all sorts of problems with tss/360. there was a 2702
telecommunication controller. some people from the science center came out
and installed cp67 in jan68 and I got to start playing with it on
weekends ... it never really got to the place where it was in general
production operation. when the univ. got some teletypes ... I had to
add tty terminal support to cp67.

cp67 came with 1052 & 2741 terminal support and had some tricky code
that attempted various terminal sequences to dynamically determine
the terminal type. the 2702 implemented a specific line-scanner for each
terminal type and had a "SAD" command that could dynamically switch
which line-scanner was associated with each port (set the port to one
line-scanner, do some sequences and see if there was no error, otherwise
reset the line-scanner and try again). I did a hack to cp67 for tty/ascii
support that extended the process to tty/ascii terminals (three
different kinds of line-scanners & terminal types to try).
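
a rough sketch of that identification loop (the scanner names and the
set_scanner/probe callables below are placeholders for illustration, not
actual cp67 or 2702 interfaces):

LINE_SCANNERS = ["1052", "2741", "TTY"]   # the three line-scanner/terminal types to try

def identify_terminal(port, set_scanner, probe):
    # set_scanner(port, name) stands in for the 2702 "SAD" command (associate
    # a line-scanner with the port); probe(port) stands in for writing an
    # identification sequence, returning True if it completes without error
    for scanner in LINE_SCANNERS:
        set_scanner(port, scanner)        # switch the port's line-scanner
        if probe(port):                   # sequence came back clean: type found
            return scanner
        # error: reset and try the next line-scanner
    return None                           # no terminal type identified on this port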

there was a telco box installed that had a whole bunch of incoming
dialup lines, and the box could be configured so that for an incoming call
on a busy line, it would hunt for a different available free line (fading
memory, i think a "hunt group"?). The result was that a single number could
be published for dialup access ... and the same number could be used for a
pool of incoming lines.

Initial objective was to have a single incoming phone number published
for all terminal types (single pool/hunt group?), relying on the dynamic
terminal type identification code. However, the 2702 implementation wasn't
quite so robust ... while any line-scanner could be dynamically
associated with a port ... they had taken a short-cut and hardwired a
line-speed oscillator to each port (it would dynamically switch the
line-scanner for each port but not the line-speed ... and there was a
direct physical wire between the telco box for each phone number and the
2702 ports). As a result, it required a different pool of numbers/lines
(and published dial-in phone number) for 1052/2741 and tty/ascii that
corresponded to the different port line-speeds on the 2702.

somewhat as a result, the univ. started a project to build a clone
telecommunication controller ... started with an interdata/3, reverse
engineering the 360 channel interface and building a channel interface
board for the interdata/3. The interdata/3 would emulate 2702 functions
with the added enhancement that terminal line-speed determination was done
in software. This evolved into an interdata/4 (for the mainframe channel
interface) with a cluster of interdata/3s dedicated to port interfaces.
Interdata took the implementation and sold it as a product (and four of
us got written up as responsible for the clone controller business). Later
Perkin/Elmer bought Interdata and the box was sold under the
Perkin/Elmer name. In the late 90s, I was visiting a large merchant
acquiring datacenter (i.e. handled incoming credit card transactions
from point-of-sale terminals for a significant percent of the merchants
in the US) and they had one such perkin/elmer box handling incoming
point-of-sale terminal calls.

I haven't yet found any pictures of the 2741 at home ... but do have
some later pictures of 300baud cdi miniterm that replaced the 2741
(although the following does have pictures of 2741 APL typeball that I
kept):
http://www.garlic.com/~lynn/lhwemail.html#oldpicts

--
virtualization experience starting Jan1968, online at home since Mar1970

"Keith F. Lynch" <kfl@KeithLynch.net> writes:
A policy of bailing out firms too big to fail harms small
businesses. Especially since the tax money used for those bailouts
partly comes from small businesses.

there was some theory that allowing companies to become too big to
fail enabled them to become more competitive & efficient. eventually
that policy allowed much of the country's economy to be channeled thru
such operations. when the bubble burst and the mess hit ... letting
those companies fail would have had significant immediate pain &
adjustment ... but possibly not as much (long-term) as letting them
continue to operate.

i've mentioned in the past looking at a periodic financial industry
publication that presented hundreds of pages of operating numbers
arranged in two columns ... one column for the avg numbers of the
largest national banks compared to a column for the avg numbers of the
largest regional banks. the largest regional banks were actually
slightly more efficient in a number of things compared to the largest
national banks ... an indication that the justification for allowing too
big to fail institutions was not valid. Other indications are that they
have used their too big to fail status to perpetuate inefficient and
possibly corrupt operation.

this is somewhat related to a recent post about the ratio of
executive to employee compensation exploding to 400:1 (after having been
20:1 for a long time and 10:1 in most of the rest of the world).
http://www.garlic.com/~lynn/2010m.html#62 Dodd-Frank Act Makes CEO-Worker Pay Gap Subject to Disclosure

recent reference to the feds following the money trail (money used to buy
drug smuggling planes) back to some too big to fail banks. apparently
because they were too big to fail (feds leaning over backwards to do
everything to keep them operating), rather than prosecuting, throwing
the executives in jail and shutting down the institutions ... they
asked the institutions to stop the illegal money laundering.
http://www.garlic.com/~lynn/2010m.html#24 Little-Noted, Prepaid Rules Would Cover Non-Banks As Wells As Banks

--
virtualization experience starting Jan1968, online at home since Mar1970

z/VM LISTSERV Query

gahenke@GMAIL.COM (George Henke) writes:
Does anyone know a good LISTSERV for z/VM?

do you want one that runs on z/VM or one about z/VM?

the original was implemented on VM in the mid-80s as part of EARN
(european flavor of bitnet) ... aka the "bit.listserv" part of
usenet are the original bitnet/earn mailing lists gatewayed
to the internet. misc. past posts mentioning bitnet/earn
http://www.garlic.com/~lynn/subnetwork.html#bitnet

listserv was sort of a subset of the internal TOOLSRUN that ran on VM
internally from the early 80s ... somewhat an outcome of corporate
executives becoming aware of online computer conferencing on the
internal network ... something that I got blamed for in the late 70s and
early 80s. ... misc. past posts mentioning the internal network
http://www.garlic.com/~lynn/subnetwork.html#internalnet

On 08/25/2010 09:04 AM, Richard Salz wrote:
Also, note that HSTS is presently specific to HTTP. One could imagine
expressing a more generic "STS" policy for an entire site

A really knowledgeable net-head told me the other day that the problem
with SSL/TLS is that it has too many round-trips. In fact, the RTT costs
are now more prohibitive than the crypto costs. I was quite surprised to
hear this; he was stunned to find it out.

Look at the "tlsnextprotonec" IETF draft, the Google involvement in SPDY,
and perhaps this message as a jumping-off point for both:
http://web.archiveorange.com/archive/v/c2Jaqz6aELyC8Ec4SrLY

I was happy to see that the interest is in piggy-backing, not in changing
SSL/TLS.

then SSL, theoretically being stateless on top of tcp, added a whole
bunch of additional chatter. there have frequently been changing
trade-offs between transmission and processing ... but SSL started out
being excessive in both transmission and processing (in addition to
having the deployment requirement that the user understand the
relationship between the website they believed they were talking to
and the URL they had to supply to the browser .... a requirement that
was almost immediately violated).

my pitch forever has been to leverage key distribution piggy-backed on
the domain name to ip-address (dns) response ... and use that to do an
encrypted/validated reliable transaction within the HSP 3-packet minimum
exchange.

Win 3.11 on Broadband

"Joe Morris" <j.c.morris@verizon.net> writes:
"Hunt group" is the correct term, but I doubt that it was implemented in CPE
at the data center. It would more likely have been implemented in the CO
unless the customer had its own telephone system, in which case it would
have been implemented in the corporate telephone switch.

Of course, with more recent technology it's possible even on small customer
premises to do all sorts of things today that once were done only by Ma Bell
in her kitchen with big switchframes and big $$$. (Asterisk, anyone?)

Andrew Swallow <am.swallow@btopenworld.com> writes:
So this is a human problem. Human problems can be solved by removing
the problem humans.

When a too big to fail bank hits problems the government can fire the
directors. Any shadow directors who may be hiding some where should
also be from frog marched out of the building. Recruit some
replacement directors and instruct them to sort out the problems.
Warn the recruits and that if they fail they will be frog marched out.

In the fall2008 congressional hearings into the financial mess the
term mis-aligned business process came up several times
... particularly in reference to the rating agencies. The issue was
that with the sellers paying for the ratings on the toxic CDOs
... the people at the rating agencies were incented to give the
"triple-A" ratings, asked for by the sellers (in effect, the sellers
were able to buy whatever ratings they wanted).

this played a significant role in the financial mess since it provided
nearly unlimited funds to the unregulated loan originators ... and
allowed them to unload all loans ... regardless of risk, borrowers'
qualifications and/or loan quality ... at a premium price (something that
might have been a tens-of-billions-of-dollars problem ballooned into a
tens-of-trillions-of-dollars problem ... which brings in the recent
reference that magnitude can matter).

it reminds me of an industrial espionage, trade-secret theft case
involving the disk division circa 1980. the claim was for billions of
dollars (basically the difference between a clone manufacturer being able
to have a clone ready to ship on the day of announcement ... or the 6+
month delay that it would require to come up with a clone thru a reverse
engineering process).

the judge made some ruling about security proportional to risk ... for any
information valued at billions of dollars ... the company had to
demonstrate processes in place to deter employees from walking away with
such information (security processes in proportion to the
value of the information). basically people can't be assumed to be
honest in the face of temptation of that magnitude (baby proof the
environment ... somewhat analogous to the requirement for fences around
swimming pools ... since children can't be held responsible for wanting
to drown themselves; given sufficient temptation, adults can't be held
responsible for wanting to steal something).

in any case, the financial mess scenario subtheme was that it is much
easier to regulate and provide effective control when the business
processes are "aligned" and people are motivated to do the right thing.
Regulation and control can become nearly impossible when the business
processes are mis-aligned and people have strong motivations to do the
wrong thing.

Andrew Swallow <am.swallow@btopenworld.com> writes:
So this is a human problem. Human problems can be solved by removing
the problem humans.

When a too big to fail bank hits problems the government can fire the
directors. Any shadow directors who may be hiding some where should
also be from frog marched out of the building. Recruit some
replacement directors and instruct them to sort out the problems.
Warn the recruits and that if they fail they will be frog marched out.

this also came up in the economists' discussion about congress being the
most corrupt institution on earth ... and that a change to a "flat rate"
tax could go a long way toward correcting the situation.

the scenario is that a majority of lobbying and the associated huge sums of
money are related to getting special tax breaks. all of that goes away with
a flat-rate tax.

the other points they made ...

one was that the current infrastructure has resulted in a 65,000+ page
tax code ... and dealing with that level of complexity costs the country
between 3 and 6 percent of GDP (i.e. productivity lost to dealing with the
complexity of all the tax code provisions). the claim is that a "flat
rate" tax would reduce that to 400-500 pages (freeing up the enormous
resources currently involved in dealing with the tax code for more
productive activities)

also the current infrastructure results in businesses making non-optimal
business decisions ... reducing the competitive position of the
country's businesses.

... besides eliminating the motivation behind the majority of existing
corruption and lobbying that goes on.

shmuel+ibm-main@PATRIOT.NET (Shmuel Metz , Seymour J.) writes:
The term microcoded is normally used when the simulated architecture
is different from the underlying architecture, e.g., 108-bit wide
control words simulating the 370 instruction set on 1 3168.

the high-end had horizontal microcode ... i.e. wide words with various
bits for doing various operations. coders needed to be aware of things
like machine cycles to fetch an operand ... since starting a fetch was
different than using the operand. this made for a higher level of
concurrency under programmer control ... but there was a much larger
number of things that the programmer had to keep track of (making
programming & development much more difficult). efficiency of horizontal
microcode was usually measured in avg. machine cycles per 370 instruction
(while it was still greater than one ... with a single horizontal
microcode instruction being executed per machine cycle ... potentially
doing several things simultaneously ... but also potentially just idle
... while it waited for various things).
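
as a back-of-the-envelope illustration of that measure (the cycle time
and cycles-per-instruction numbers below are made up for the example,
not actual 168/3033 figures):

# hypothetical numbers, only to show how "avg machine cycles per 370
# instruction" translates into 370 thruput
def mips_370(cycle_time_ns, avg_cycles_per_370_instr):
    instructions_per_sec = 1e9 / (cycle_time_ns * avg_cycles_per_370_instr)
    return instructions_per_sec / 1e6

print(mips_370(80, 2.5))   # 80ns cycle, avg 2.5 cycles/instruction -> 5.0 "370 MIPS"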

the 3033 started out as the 168 wiring diagram mapped to 20% faster
chips (the chips also had ten times as many circuits/chip ... but the
extra circuits started out going unused). during 3033 development
various things were tweaked, getting thruput up to 1.5 times the 168 (and
also reducing the machine cycles per 370 instruction). one of the issues on
high-end machines was that methodologies like ECPS resulted in little
benefit (since there weren't long sequences of 370 instructions being
replaced with microcode instructions running ten times faster). In that
sense ... the high-end horizontal microcode machines ran 370 much closer
to native machine thruput (w/o the 10:1 slowdown seen with vertical
microcode ... this is analogous to various mainframe emulators running
on intel platforms ... so there wasn't any corresponding speedup with
moving from 370 to microcode).

in the 3090 time-frame ... SIE got a big speedup compared to the 3081 ...
since there wasn't enough room for the SIE microcode on the 3081 ... and it
had to be paged in from 3310/FBA. in that time-frame Amdahl came out
with the hypervisor (basically a virtual machine subset built as part of
the machine, not requiring a vm/370 operating system) which was done in
what they called macrocode ... a 370 subset ... that could be executed
more efficiently than the standard rules for 370 instruction execution. The
3090 eventually responded with PR/SM (the basis for current day LPAR)
... but it took much longer and involved much greater development effort
since it all had to be done in horizontal microcode.

SIE and other virtual machine assists were different than the 10:1
speedup from ECPS. Basically a significant number of privileged
instructions were enhanced to recognize 3 modes of execution: normal 370
supervisor mode, normal 370 problem mode (generate a privileged
interrupt), and 370 virtual machine supervisor mode. The execution
was different in 370 supervisor mode and 370 virtual machine
supervisor mode. The speedup didn't come from replacing VM/370 kernel
instructions with microcode instructions ... a big part came from
eliminating the interrupt into the vm/370 kernel and the associated task
switch (and change in cache contents) ... and then the subsequent switch
back to the virtual machine.

I had given a talk on how we did ECPS in the mid-70s for the 138/148 at a
(vm370) baybunch user group meeting (held monthly at SLAC in Palo Alto),
which was attended by a number of people from Amdahl (various vendors
from the bay area were regulars at baybunch). Later they came back and
said that the amdahl hypervisor enhancements didn't show as much speedup
as might be indicated by my (10:1) ECPS talk. However, we had to have
the discussion that the machine implementations were totally different
(horizontal microcode versus vertical microcode) and the kinds of things
done in the hypervisor were different than what was being done in ECPS
(i.e. the speedup for ECPS was only possible where there was a 10:1
difference between the native microprocessor and 370 ... which was not
the case for the high-end machines ... which executed 370 at much closer
to native machine speed).

one of the things I ran into at the (ISO chartered) ANSI X3S3.3
(responsible for standards related to OSI level3 & level4) meetings
with regard to standardization of HSP (high speed protocol) ... was
that ISO had a policy that it wouldn't do standardization on things that
violated the OSI model; HSP fell foul of that on several counts:

2) supported internetworking ... which doesn't exist in the OSI model ...
it would sit in a non-existent layer between level3 & level4

3) went directly to the MAC interface ... which doesn't exist in the OSI
model ... something that sits approx. in the middle of layer3 (above the
link layer and includes some amount of the network layer)

In the IETF meetings at the time of the original SSL/TLS ... my view was
that ipsec wasn't gaining traction because it required replacing
parts of the tcp/ip kernel stack (upgrading all the kernels in the world
was much more expensive then than it is now). That year, two things
side-stepped the ipsec upfront kernel stack replacement problem:

• SSL ... which could be deployed as part of the application w/o
requiring changes to existing infrastructure

• VPN ... introduced in the gateway session at the fall94 IETF
meeting. This was implemented in gateway routers w/o requiring any
changes to existing endpoints. My perception was that it upset the
ipsec people until they started referring to VPN as lightweight ipsec (but
that opened things up for ipsec to be called heavyweight ipsec). There
was a problem with two classes of router/gateway vendors ... those
with processors that could handle the (VPN) crypto load and those that
had processors that couldn't handle the crypto load. One of the
vendors that couldn't handle the crypto load went into standards
stalling mode and also, a month after the IETF meeting, announced a VPN
product that involved adding hardware link encryptors ... which
would then require dedicated links between the two locations (as
opposed to tunneling thru the internet).

....

I would contend that various reasons why we are where we are
... include solutions that have champions with profit motivation as
well as things like ease of introduction ... and issues with being
able to have incremental deployments with minimum disruption to
existing facilities (like browser application based solution w/o
requiring any changes to established DNS operation).

On the other hand ... when we were brought in to consult with the
small client/server startup that wanted to do payment transactions
(and had also invented SSL) ... I could mandate multiple A-record
support (basically an alternative path mechanism) for the webserver to
payment gateway TCP/SSL connections. However, it took another year to
get their browser to support multiple A-records (even when supplying
them with example code from the TAHOE 4.3 distribution) ... they started
out telling me that the multiple A-record technique was "too advanced".
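
the basic idea of multiple A-record support, in a minimal modern-day
sketch (host/port are placeholders, and this is not the TAHOE 4.3 code
that was supplied to them): resolve every address advertised for the
name and try each one in turn rather than giving up after the first
failure.

import socket

def connect_any(host, port, timeout=5.0):
    last_err = None
    for family, socktype, proto, _, addr in socket.getaddrinfo(
            host, port, type=socket.SOCK_STREAM):
        s = socket.socket(family, socktype, proto)
        s.settimeout(timeout)
        try:
            s.connect(addr)      # if this address is down, fall thru to the next one
            return s
        except OSError as err:
            last_err = err
            s.close()
    raise last_err or OSError("no usable addresses for %s" % host)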

An early example requirement was one of the first large
adopters/deployments of the e-commerce server, which advertised during
national sunday football and was expecting big e-commerce business during
sunday afternoon halftime. Their e-commerce webserver had redundant
links to two different ISPs ... however one of the ISPs had a habit of
taking equipment down during the day on sunday for maintenance (w/o
multiple A-record support, there was a large probability that a
significant percentage of browsers wouldn't be able to connect to the
server on some sunday halftime).

--
virtualization experience starting Jan1968, online at home since Mar1970

towards https everywhere and strict transport security (was: Has there been a change in US banking regulations recently?)

On 08/27/2010 12:38 AM, Richard Salz wrote:
(For what it's worth, I find your style of monocase and ellipses so
incredibly difficult to read that I usually delete your postings unread.)

It is well studied. I had gotten blamed for online computer
conferencing on the internal network in the late 70s and early 80s
(rumor is that when the executive committee became aware ... 5of6
wanted to immediately fire me ... supposedly there was only one
holdout).

somewhat as a result, there was a researcher paid to sit in the back
of my office for nine months, taking notes on how I communicated,
face-to-face, telephone, computer ... got copies of all incoming and
outgoing email, logs of all instant messages, etc. Besides being a
corporate research report, it was also the basis for several papers,
books and a stanford phd (joint between language and computer AI). One
number was that I averaged electronic communication with 275 different
people per week for the 9month period. lots of past posts mentioning
computer mediated communication
http://www.garlic.com/~lynn/subnetwork.html#cmc

in any case, we were brought in to help wordsmith the cal. state
electronic signature legislation. the certification authority industry
was heavily lobbying (effectively) that digital certificates had to be
mandated for every adult.

The certification authority industry, besides doing the SSL domain
name digital certificates, was out pitching to wall street money
people a $20B/annum business case (basically all adults with a
$100/annum digital certificate). Initially they appeared to believe
that the financial industry would underwrite the certificates. The
financial industry couldn't see the justification for the $20B/annum
transfer of wealth to the certification authority industry. There
were various attempts then to convince consumers that they should pay
it directly out of their own pocket. in the payment area, they were also
pitching to the merchants that, as part of deploying a digital certificate
infrastructure, the burden of proof in digitally signed payment
transactions would be switched to consumers (somewhat like the UK, where
approx. that has happened as part of payment hardware tokens).

That netted out to consumers paying $100/annum (for digital
certificates), out of their own pocket, for the privilege of having
the burden of proof in disputes shifted to them. that didn't sell
... so there was heavy lobbying all around the world wanting gov.
mandates of digital certificates for every adult (paid for by the
individual). The lawyers working on the cal. legislation explained why
digital signatures didn't meet the criteria for "human signatures"
(demonstration of a human having read, agreed, authorized, and/or
approved) needed by electronic signature legislation. we got some
patents in the area, the 32nd just granted on tuesday; they are all
assigned, we have no interest and have been long gone for years.
http://www.garlic.com/~lynn/aadssummary.htm

There are a couple of issues with new technology uptake ... it is much more
successful when 1) there is no incumbent technology already in the
niche, 2) there are strong champions with profit motivation and 3)
there is at least some perceived benefit. In the 90s, I would
pontificate on how SSL domain name certificates didn't actually provide
any significant security ... but were "comfort" certificates (for
consumers), aka the benefit was significantly a matter of publicity.

Better solutions that come along later don't necessarily win
... having an incumbent to deal with, and they are especially at a
disadvantage if there aren't major champions (typically with strong
profit motivation).

--
virtualization experience starting Jan1968, online at home since Mar1970

Mainframe Hall of Fame (MHOF)

There was an effort in the late 70s to replace the large number of
internal microprocessors with 801/risc ... including having the low &
mid-range 370s use 801/risc as the native microprocessor (emulating
370). The original AS/400 (followon to the s/38) was also to use 801/risc
(as were a large number of other things). The chips ran into problems
and the efforts were abandoned for various reasons. The 801/risc chips
at the time could have fit the bill ... but they were much more
expensive chips and required more expensive hardware support
infrastructure ... than could be done in the PC market place at the
time. Note that ALL the low-end and mid-range 360s&370s were
analogous to Hercules ... in that 370 emulation ran on various kinds
of underlying microprocessors. The microcode technology at the time
avg. approx. 10 native instructions for every 370 instruction (needing
a 5mip native processor in order to achieve .5mip 370 thruput).
http://www.garlic.com/~lynn/subtopic.html#360mcode

The company did have (an extremely) expensive 68K machine that was
being sold by the instrument division.

Part of the reason that some amount of the above still survives was
that in the late 70s and early 80s, I got blamed for computer
conferencing on the internal network (which was larger than the
arpanet/internet from just about the beginning until late '85 or early
'86). Somewhat as a result there was a detailed study of how I
communicated ... which included putting in place processes that
logged/archived all my electronic communication (incoming/outgoing
email, instant messages, etc).
http://www.garlic.com/~lynn/subnetwork.html#cmc

--
virtualization experience starting Jan1968, online at home since Mar1970

for the past couple of years I've frequently made reference to the former
comptroller general ... who retired early so he could be outspoken about
congress and the budget (he had been appointed in the late 90s for a 15
year term); frequently making references that nobody in congress for the
last 50 years appeared to be capable of middle school arithmetic.

he was on a tv show pushing his book ... and made reference to the budget
taking a turn for the worse after the congressional fiscal responsibility
law expired in 2002 ... which appeared to have allowed congress to pass
the bill that created the worst budget hit of all ... the claim was
$40TRILLION in unfunded mandates for the partd/drug bill (i.e. worse than
everything else combined that has been done before or since).

60mins earlier had a segment on the behind-the-scenes things that went
on getting that bill passed. a major item was slipping in a one-sentence
change (just before the vote) that eliminated competitive bidding
for partd drugs. 60mins compared drugs available under partd ... with the
identical drug/brand available thru VA programs (that were obtained
thru competitive bidding) at 1/3rd the cost (i.e. lobbying by the drug
industry resulted in an enormous windfall to the drug industry).

60mins identified the 12-18 major congressmen and staffers responsible
for shepherding the bill thru (all members of the party in congressional
power in 2003). One of the things was distributing a GAO & congressional
budget office analysis w/o the one-sentence change ... while managing to
sidetrack the distribution of the updated analysis (for the effect of
the one-sentence change) until after the vote. The followup was that all
12-18 had since resigned their positions and taken high-paying positions
in the drug industry.

R.Skorupka@BREMULTIBANK.COM.PL (R.S.) writes:
Some people still use real 3270s. I do, but only for consoles and
local (non-SNA) terminals. However I do my regular work on emulator
since day 0.

i kept a real 3277 for a long time because the human factors were so
much better than the 3274/3278. we actually had an argument with kingston
about the design point of the 3274/3278 ... and they eventually came back
and said it was never targeted for online, interactive work ... it was
purely targeted at data-entry use (i.e. keypunch).

part of the problem was that they moved quite a bit of electronics out
of the terminal head and back into the 3274 (reduced manufacturing
costs). with the electronics in the 3277 there were some number of
"human factors" hacks that could be done to improve operation.

it was possible to do a little soldering inside the 3277 keyboard to
adjust the "repeat delay" and the "repeat rate".

the 327x was half-duplex and had a nasty habit of locking the keyboard if
you hit a key at the same time a write happened to go to the screen (really
terrible interactive characteristics). There was a very small fifo box
that was built; unplug the keyboard from inside the 3277 head, plug the
fifo box into the head and plug the keyboard into the fifo box. the fifo
box would queue pending keystrokes when the screen was being written
... to avoid the keyboard-locking problem.

the 3272/3277 (ANR) was also much faster than the 3274/3278 (DCA) ... there
was a joke that TSO was so non-interactive and so slow ... that TSO
users had no idea that the 3274/3278 hardware combination was enormously
slower than the 3272/3277 (because so much of the terminal head electronics
for the 3278 had been moved back into the 3274 ... there was an enormous
increase in the DCA controller/terminal head chatter ... that was
responsible for much of the significant slowdown).

later, with PC-simulated 3270s ... those doing ANR/3277 simulation had much
faster download/upload transfers than the DCA/3278 simulation (again
because there was so much more controller/terminal head chatter required
by DCA over the coax).

prior to PCs and simulated 3270s ... vm provided simulated virtual 3270s
over the internal network (late 70s, eventually released as a product) and
there was an internally developed HLLAPI for simulated keystrokes, logic
and screen scraping called PARASITE/STORY. old posts with PARASITE/STORY
description and sample STORYs:
http://www.garlic.com/~lynn/2001k.html#35 Newbie TOPS-10 7.03 question
http://www.garlic.com/~lynn/2001k.html#36 Newbie TOPS-10 7.03 question

for other topic drift ... circa 1980, STL was bursting at the seams and
they decided to move 300 people from the IMS group to an offsite
building (approx. ten miles away) and let them remote back into the
computers at the datacenter.

They tried SNA remote 3270 support, but the IMS group found the
operation extremely dismal (compared to the local 3270 vm370 support
that they were used to inside STL ... even tho they were
3274/3278). Eventually it was decided to deploy NSC HYPERChannel
"channel extenders" with local 3274/3278 ("channel attached") terminals
at the remote site. For the fun of it, I wrote the software driver for
the HYPERChannel box ... it basically scanned the channel program
... created a simplified version which was downloaded (over the
HYPERChannel network) to the HYPERChannel A51x remote device adapter
(aka the channel emulator box at the remote building).

There was no noticeable difference in terminal response at the remote
building. However, there was an interesting side-effect at the STL
datacenter ... with overall system thruput improving 10-15%. Turns out
with the mainframes (168, 3033), installations were used to spreading the
3274 controllers over the same channels as the disk controllers. The
problem turned out to be that the 3274s were also extremely slow on the
channel side (besides being slow on the terminal head side) ... with
very high channel busy ... even for the simplest of operations. The
HYPERChannel boxes that directly attached to (real) mainframe channels
were significantly more efficient than the 3274s ... for the identical
operations (the enormous 3274 channel busy time had been moved to the
simulated channel HYPERChannel A51x boxes at the remote site) ... which
resulted in significantly reduced contention with disk operations and
the overall 10-15% increased thruput. misc. old posts mentioning
various things ... including the HYPERChannel work
http://www.garlic.com/~lynn/subnetwork.html#hsdt

following post in the same thread ... comment about rewriting IOS for
disk engineering so it would never fail (related to them attempting to
use MVS and finding it had a 15min MTBF in their environment)
http://www.garlic.com/~lynn/2007.html#2

about initial MVS regression tests with injected 3380 errors, MVS
required re-IPL in all the cases (and in 2/3rds of the cases, no
indication what the problem was). misc. past posts mentioning getting
to play disk engineer in bldgs 14&15
http://www.garlic.com/~lynn/subtopic.html#disk

--
virtualization experience starting Jan1968, online at home since Mar1970

I had done dynamic adaptive resource management while an undergraduate in
the 60s ... and the company included it in the cp67 distribution.

then in the early 70s, the cp67->vm370 morph had a lot of simplification
and most of it was dropped ... which was followed periodically by lots
of pressure from SHARE to have me re-release it for vm370.

then there was the FS period that totally consumed most of the
corporation's attention ... and the 370 hardware & software product
pipelines were allowed to dry up. when FS was finally killed, there was a
mad rush to get products back into the 370 pipeline
http://www.garlic.com/~lynn/submain.html#futuresys

a combination of SHARE pressure and the mad rush to get stuff back into
the product pipeline was enough to overcome development group NIH ... I was
told to create the resource manager ... which was also selected to be
the guinea pig for starting to charge for kernel software (and i had to
spend lots of time with various parties working on policies for kernel
software charging) ... some past posts about starting to charge for
application software with the 23jun69 unbundling announcement (but
initially, kernel software was still free)
http://www.garlic.com/~lynn/submain.html#unbundle

Somebody from corporate reviewed the resource manager specs and asked
where all the tuning parameters were (the favorite son operating system
in POK was doing a resource manager with an enormous number of manual
tuning parameters). The comment was that all "modern" operating systems had
enormous numbers of manual tuning knobs for use by customer specialists
... and my resource manager couldn't be released w/o having some manual
tuning parameters. I tried to explain that a major point of the dynamic
adaptive resource manager ... was that the resource management did all
the "tuning" ... doing all the work dynamically adapting the system to
different configurations and workloads ... but it fell on deaf ears.

So I was forced to add some number of manual tuning parameters and placed
them in a module called SRM ... and all the dynamic adaptive stuff went
into a module called STP (after the TV commercial punch line for a product
associated with muscle cars of the 60s ... The Racer's Edge).

I published the detailed description of the operations (including the
components in SRM), how they operated and also published the
code (which was also included as part of the standard source
distribution & maintenance; later I was told, the details were
even taught in some univ. courses). However, there was a joke related
to the nature of dynamic adaptive and feedback control operations
... given that the "degrees of freedom" afforded the (SRM)
manual tuning knobs were less than the "degrees of freedom" allowed
the dynamic adaptive mechanism, over the same components (i.e. the
dynamic adaptive nature could more than compensate for any manual
changes that might be made).
http://www.garlic.com/~lynn/subtopic.html#fairshare

--
virtualization experience starting Jan1968, online at home since Mar1970

Nearly $1,000,000 stolen electronically from the University of Virginia

In the mid-90s, dialup consumer online banking gave pitches on the
motivation for moving to the internet (a major justification was the
significant cost of supporting the proprietary dialup infrastructure
... including all the issues with supporting serial-port modems; one
such operation claimed a library of over 60 different drivers for
various combinations of customer PCs, operating systems, operating
system levels, modems, etc).

At the same time, the dialup business/commercial online
cash-management operations were pitching why they would NEVER move
to the internet ... even with SSL, they had a long list of possible
threats and vulnerabilities.

Some of the current suggested countermeasures are that businesses have
a separate PC that is dedicated solely to online banking operations
(and NEVER used for anything else).

in that period ... the executive responsible for IMS had left and joined
a large financial institution in the area ... and for some time was out
hiring IMS developers ... eventually having a larger IMS development
group than STL. Also, an email reference (about Jim leaving for Tandem and
palming off database consulting with the IMS group to me) makes mention of
foreign entities having a competitive IMS product. In any case, STL
management was quite sensitive to the issue of competitive IMS development
activity.

the IMS FE service group in the Boulder area ... was faced with a similar
situation ... being moved to a bldg on the other side of the highway
... and a similar HYPERChannel (channel extender) implementation was
deployed for them.

In the STL case, there was a T3 digital radio (microwave) link that went to
a repeater tower on the hill above STL, to a dish on top of bldg. 12 (on the
main plant site) and then to a dish on the roof of the off-site bldg. where
the relocated IMS group was sent. After "85" was built (an elevated section
cutting the corner of the main plant site), radar detectors were
triggered when autos drove thru the path of the signal (between the tower
on the hill and the roof of bldg. 12).

In the boulder case, they were moved to a bldg across the highway from
where the datacenter was located. Infrared T1 modems (sort of a
higher-powered version of consumer remote controls) were placed on the
roofs of the two bldgs to carry the HYPERChannel signal. There was some
concern that there would be "rain-fade" resulting in transmission
interruption during severe weather. However, during one of the worst
storms ... a white-out snow storm when people couldn't get into work, the
error monitoring recorded only a slight increase in bit-error rate.

However, there was an early transmission interruption problem that would
occur in the middle of the afternoon. Turns out that as the sun crossed
the sky, it warmed different sides of the bldgs, which caused the bldgs
to lean slightly in different directions. This slight change in bldg.
angle was enough to throw off the alignment of the infrared modems. This
resulted in having to reposition the infrared modem mounting poles on the
roof to make them less sensitive to the way the bldgs. leaned as
different sides were heated and cooled during the course of the day.

The HYPERchannel vendor attempted to talk the corporation into letting
them release my software support. However, there was strong objection
from the group in POK that was hoping to eventually get ESCON released
(and they felt any improvement of HYPERchannel in the market would
reduce the chance of ever making a business case for shipping ESCON). As a
result, the HYPERchannel vendor had to reverse engineer my software
support and re-implement it from scratch (to ship to customers).

One of the things I had done in the support ... was that if I got an
unrecoverable transmission error ... I would simulate a "channel check"
error in the status back to the kernel software. This was copied in the
vendor implementation and would result in a phone call from the 3090
product administrator several years later. Turns out that the industry
service that gathered EREP data and generated summary reports of error
statistics ... was showing 3090 "channel checks" at 3-4 times the
expected rate.

They tracked it down to the HYPERchannel software support generating
"channel checks" in the simulated ending status. After a little research,
I determined that IFCC (interface control checks) resulted in the identical
path through error recovery (as "channel checks") and talked the
HYPERchannel vendor into changing their software support to simulate
IFCC for unrecoverable transmission errors.
http://www.garlic.com/~lynn/subnetwork.html#hsdt

About the same time, there was a different problem inside the
corporation ... that seemed to affect STL more than many of the other
labs. I had stumbled across the ADVENTURE game at Tymshare (it had been
ported from a PDP10 at stanford to the TYMSHARE vm370/cms commercial
timesharing service) ... managed to obtain a copy of the fortran
source ... and made the CMS executable available on the internal
network. For a period, some number of locations seemed to have all
their computing resources going to employees doing nothing but playing
ADVENTURE (for people demonstrating that they had successfully acquired
all points and finished the game, I would send them a copy of the
source). STL management would eventually decree that employees would
have a 24hr grace period, but after that, any employee caught playing
ADVENTURE during standard work hours would be dealt with severely.

In the mid-80s, NCAR did a SAN/NAS-like filesystem implementation using
MVS acting as tape<->disk file staging for (non-IBM) supercomputers with
HYPERChannel interconnect. Non-IBM systems would send MVS a request for
some data ... MVS would make sure it was staged to disk, download a
channel program into the memory of the appropriate HYPERChannel remote
device adapter and return a pointer (for that channel program) to the
requester. The requester then could directly execute the channel program
... flowing the data directly from disk to the requester (over the
HYPERChannel network) w/o requiring it to pass thru MVS (modulo any
tape/disk staging).

This became the requirement for "3rd party transfers" in the later
standardization effort for HiPPI (100mbyte/sec standards form of Cray
channel) & HiPPI switch with IPI-3 disks (i.e. not requiring
transferred data to flow thru the control point) ... and also showed
up as a requirement in FCS (& FCS switch) standards meetings (i.e. what
FICON is built on).

In the early 90s, gov. labs were encouraged to try and commercialize
technology they had developed. NCAR did a spin-off of their system as
"Mesa Archival" ... but implementation rewritten to not require MVS
(i.e. support done on other platforms). San Jose disk division invested
in "Mesa Archival" ... and we were asked to periodically audit and help
them whenever we could (they had offices at the bottom of the hill from
where NCAR is located) ... aka San Jose was looking at it as helping
them get into large non-IBM-mainframe disk farms.

--
virtualization experience starting Jan1968, online at home since Mar1970

aka one of the issues with adding new features was that the
development and other product costs had to be covered by revenue flow
... this could be fudged some by combining different products into the
same group and calculating revenue against costs at the
group/aggregate level (leveraging revenue from products where the
costs had been minimized, to underwrite the costs of other products).

--
virtualization experience starting Jan1968, online at home since Mar1970

Nearly $1,000,000 stolen electronically from the University of Virginia

as i've mentioned before, in the mid-90s, consumer online dialup
banking operations were making presentations about moving to the internet
... largely motivated by the significant consumer support costs
associated with the proprietary dialup infrastructure (one presentation
claimed a library of over 60 drivers just to handle different
combinations of serial-port modems, operating system levels, etc; the
problems with serial-port conflicts are well known and were a major
motivation for USB .... although there seemed to have been a serious
lapse at the beginning of the century with an attempted deployment of
a new financial-related serial-port device).

in the same mid-90s time frame, the commercial/business online dialup
cash-management (aka banking) operations were making presentations that
they
would NEVER move to the internet because of the large number of
threats and vulnerabilities (still seen to this day).

the current scenario has a countermeasure recommendation (for
internet-based commercial online banking) that businesses should
have a separate PC that is solely dedicated to online banking
and NEVER used for any other purpose.

--
virtualization experience starting Jan1968, online at home since Mar1970

Baby Boomer Execs: Are you afraid of LinkedIn & Social Media?

I was blamed for computer conferencing on the internal network in the
late 70s and early 80s (some estimate that between 20,000 & 30,000
employees were reading some amount of the material, if not directly
participating). Folklore is that when the executive committee first
learned of the internal network and computer conferencing, 5of6
members wanted to fire me. misc. past posts mentioning internal
network:
http://www.garlic.com/~lynn/subnetwork.html#internalnet

Somewhat as a result, a researcher was paid to sit in the back of my
office for 9 months, taking notes on how I communicated. They also got
copies of all my incoming and outgoing email and logs of all instant
messages. The result was a research report as well as material for papers,
books, and a Stanford PHD (joint with Language and Computer AI ... in
the area of computer mediated communication). One number from the
study was that I communicated electronically directly with an avg. of
270 different people per week (for the 9months of the
study). misc. past posts mentioning computer mediated communication:
http://www.garlic.com/~lynn/subnetwork.html#cmc

For the most part, self-signed certificates are a side-effect of the
software library; they are actually just an entity-id/organization-id
paired with a public key. They can be used to populate a repository of
trusted public keys (with their corresponding
entity-id/organization-id); effectively what is found preloaded in
browsers as well as what is used by SSH.
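
a minimal sketch of that point, using the python "cryptography" package
(the names are placeholders): a self-signed certificate is produced by
signing the subject/public-key pair with the subject's own key
(issuer == subject), i.e. it asserts nothing beyond the
entity-id/public-key pairing itself.

import datetime
from cryptography import x509
from cryptography.x509.oid import NameOID
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import rsa

key = rsa.generate_private_key(public_exponent=65537, key_size=2048)
name = x509.Name([x509.NameAttribute(NameOID.COMMON_NAME, u"example.test")])
cert = (x509.CertificateBuilder()
        .subject_name(name)
        .issuer_name(name)                    # issuer == subject: self-signed
        .public_key(key.public_key())
        .serial_number(x509.random_serial_number())
        .not_valid_before(datetime.datetime.utcnow())
        .not_valid_after(datetime.datetime.utcnow() + datetime.timedelta(days=365))
        .sign(key, hashes.SHA256()))
# nothing here vouches for "example.test" except the key holder themselves;
# a relying party trusts it only by adding the entity-id/public-key pair to
# its own repository of trusted public keys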

The difference in a Certification Authority paradigm is that the
public keys (frequently encoded in self-signed certificates), from
some (say browser) trusted public key repository, can be used to
extend trust to other public keys (by validating digital certificates
which contain other public keys). The methodology has been extended to
form a trust chain ... where cascading public keys are used to extend
trust to additional public keys (paired with their
entity-id/organization-id).

In the Certification Authority paradigm, all public keys from a user's
(possibly browser) trusted public key repository are accepted as being
equivalent ... reducing the integrity of the overall infrastructure to
the Certification Authority with the weakest integrity (if a weak
integrity CA incorrectly issues a certificate for some well known
organization, it will be treated the same as the correctly issued
certificate from the highest integrity CA).

Another way of looking at it, digital certificates are messages with a
defined structure and purpose, that are validated using trusted public
keys that the relying party has preloaded in their repository of
trusted public keys (or has been preloaded for them, in the case of
browsers).

If you are going to maintain your own trusted repository ... what you
are really interested in is the trusted server URL/public key pair
(contained in the certificate) ... the certificates themselves then
become redundant and superfluous, and all you really care about is whether
the (server's) public key ever changes ... in which case there may be a
requirement to check with some authoritative agency as to the "real"
public key for that server URL.
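
a minimal sketch of such a do-it-yourself repository (the host name and
the stored fingerprint are placeholders; assumes the python
"cryptography" package): record the server's public key fingerprint
once, and afterwards only check whether it has changed.

import hashlib, ssl
from cryptography import x509
from cryptography.hazmat.primitives import serialization

def server_key_fingerprint(host, port=443):
    pem = ssl.get_server_certificate((host, port))
    cert = x509.load_pem_x509_certificate(pem.encode())
    spki = cert.public_key().public_bytes(
        serialization.Encoding.DER,
        serialization.PublicFormat.SubjectPublicKeyInfo)
    return hashlib.sha256(spki).hexdigest()

# the actual trusted repository: URL/public-key pairs recorded earlier
trusted = {"example.test": "fingerprint recorded on first use (placeholder)"}

host = "example.test"
if server_key_fingerprint(host) != trusted[host]:
    # the key changed: check with some authoritative source before trusting;
    # the certificate itself adds nothing beyond this URL/key pairing
    raise RuntimeError("public key for %s has changed" % host)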

We had been called in to consult with a small client/server startup that
wanted to do payment transactions on their server; the startup had
also invented this technology called SSL that they wanted to use. Part of
the effort was applying SSL technology to processes involving their
server. There were some number of deployment and use requirements for
"safe" SSL ... which were almost immediately violated.

--
virtualization experience starting Jan1968, online at home since Mar1970