DNSSEC is somewhat of a catch-22 for the Certification Authority
industry. Originally, SSL was intended to offset various perceived
weaknesses in the DNS infrastructure. Improving the integrity of the
DNS infrastructure mitigates the justification for SSL.
http://www.garlic.com/~lynn/subpubkey.html#catch22

Also, when we were doing this stuff that is now called electronic
commerce (with SSL), we had to do various walkthrus and audits of (SSL)
CAs. Basically they require a lot of identification information from
an SSL applicant, which they then have to match against what is on file
at DNS as to the domain owner. As a result, DNS is the REAL trust
root for SSL (with the CA process somewhat obfuscating the fact). If you
can't trust key fingerprints or other details obtained from DNS
... then how can the CAs trust DNS for information about the domain
owner (for issuing a certificate)? CAs need improved DNS integrity for
the information they rely on, which would also improve the integrity of
DNSSEC information.
http://www.garlic.com/~lynn/subpubkey.html#sslcerts

--
virtualization experience starting Jan1968, online at home since Mar1970

zSecurity blog post - "READ is not benign"

We had been brought in to consult with a small client/server startup
that wanted to do payment transactions on their server; the startup
had also invented this technology called SSL they wanted to use; the
result is now frequently called electronic commerce.
http://www.garlic.com/~lynn/subnetwork.html#gateway

Somewhat as a result, in the mid-90s, we were asked to participate in
the x9a10 financial standard working group which had been given the
requirement to preserve the integrity of the financial infrastructure
for all retail payments. Part of the x9a10 work was detailed threat &
vulnerability studies of the large number of different retail payment
environments (credit, debit, stored-value, point-of-sale, unattended,
face-to-face, internet, wireless, high-value, low-value, transit
turnstile, contact, contactless, etc). The result was the x9.59
financial standard which slightly tweaked the current paradigm to
eliminate crooks being able to use knowledge of account numbers or
information harvested from previous transactions to perform fraudulent
financial transactions.
http://www.garlic.com/~lynn/x959.html#x959

We were also tangentially involved in the cal. state data breach
legislation when we were asked in to help wordsmith the cal. state
electronic signature legislation. Several of the other participants
were also heavily involved in privacy issues and had done detailed
consumer surveys. The number one privacy issue was "identity theft",
mainly in the form of "account fraud" where crooks use information
harvested from previous transactions to perform fraudulent
transactions (i.e. a kind of replay attack, as a result of skimming,
data breaches, etc.). NOTE: the x9.59 standard did nothing to prevent
such skimming and data breaches ... it just eliminated crooks being
able to use the information for performing fraudulent transactions.
http://www.garlic.com/~lynn/subpubkey.html#signature

There seemed to be little or nothing being done about the major source
of such information: data breaches. Part of the issue is that the parties
needing to protect/secure the information aren't at risk from the
breaches (the account holders are at risk, not the parties with the
breached repositories). W/o something like the data breach
legislation, there seemed to be no way to motivate the parties to
protect the transaction databases (as long as the current payment
transaction paradigm exists ... transition to x9.59 and changing the
paradigm eliminates the criminal motivation for the breaches since
the information is no longer useful for performing fraudulent
transactions).
http://www.garlic.com/~lynn/subintegrity.html#harvest

In the major breaches in the news ... the threat/vulnerability is READ
access ... harvesting the information for the purpose of performing
fraudulent transactions (a form of replay attack). The x9.59 standard
does eliminate such a READ-access threat/vulnerability ... since the
information is no longer sufficient for performing fraudulent
transactions.

--
virtualization experience starting Jan1968, online at home since Mar1970

The original (merchant server/electronic commerce) SSL deployment
requirement/assumption was that the user supplied the URL and
understood the relationship between the webserver (they thought they
were talking to) and the URL. Then SSL would validate that the webserver
(that they were really talking to) corresponded to the URL
(a countermeasure to various kinds of DNS weaknesses). This was a
necessary two-part process to guarantee that the webserver a user
thought they were talking to was the webserver that they were actually
talking to.
http://www.garlic.com/~lynn/subpubkey.html#sslcerts

That was almost immediately violated when merchants discovered that
SSL cut their thruput by 90-95% and dropped back to only using SSL for
checkout/paying. The current paradigm has the user clicking on a
checkout/pay button (from an unvalidated webserver) ... which supplies
the URL (not the user).

Users now tend to have little or no understanding of the relationship
between the webserver they think they are talking to and the
corresponding URL. As a result, SSL is reduced to just validating that
the webserver, that a user is talking to, is whatever webserver it
claims to be. An attack is a fraudulent merchant server (that hasn't
been validated) obtaining an SSL certificate for some arbitrary URL
(created thru some front company) ... which is then used for the
"pay button".

Another attack (that has happened periodically over the years) is
domain name hijacking, where an attacker is able to update the domain
ownership information at some arbitrary DNS operator ... and then
applies for an SSL certificate (with their own public key) from any SSL
CA. It is relatively trivial to register a front company (for the
fraudulent activity), which then guarantees that the information on the
SSL certificate application matches the domain name ownership
information (on file at DNS).

Part of the DNSSEC proposals has the user registering a public key at
the same time they register a domain name. Then all future communication
is digitally signed and validated with the on-file public key (as a
countermeasure to domain name hijacking) ... which also eliminates a
vulnerability for the SSL CA institutions.

In fact, SSL CAs could then request that SSL certificate applications be
digitally signed ... which they can (also) validate by a real-time fetch
of the on-file public key from the DNS infrastructure (changing an
error-prone, time-consuming and expensive identification/matching
process into an inexpensive, reliable and efficient authentication
process). A catch-22 might be the rest of the world also doing
real-time fetches of on-file public keys ... eliminating the
requirement for SSL certificates.
http://www.garlic.com/~lynn/subpubkey.html#catch22
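
Purely as a hedged sketch of that idea (nothing here is an actual DNSSEC
or CA implementation; libsodium is just assumed for the digital-signature
mechanics, and the "on-file" key below simply stands in for a public key
registered along with the domain name):

#include <stdio.h>
#include <sodium.h>

int main(void)
{
    unsigned char pk[crypto_sign_PUBLICKEYBYTES];   /* "on file" at DNS   */
    unsigned char sk[crypto_sign_SECRETKEYBYTES];   /* kept by the owner  */
    unsigned char sig[crypto_sign_BYTES];
    unsigned long long siglen;
    const unsigned char app[] = "SSL certificate application for example.com";

    if (sodium_init() < 0) return 1;

    /* at domain-registration time: the owner generates a keypair and the
       public half is registered ("on file") with the DNS infrastructure */
    crypto_sign_keypair(pk, sk);

    /* later: the domain owner digitally signs the certificate application */
    crypto_sign_detached(sig, &siglen, app, sizeof(app) - 1, sk);

    /* CA side: real-time fetch of the on-file public key (simulated here by
       already having pk) and authentication of the signed application */
    if (crypto_sign_verify_detached(sig, app, sizeof(app) - 1, pk) == 0)
        printf("application authenticated against on-file public key\n");
    else
        printf("signature does not match on-file public key\n");
    return 0;
}

The same check is what the rest of the world could also do with real-time
fetches of the on-file keys ... which is the catch-22 mentioned above.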

--
virtualization experience starting Jan1968, online at home since Mar1970

JES2 vs. JES3

before being conned into going to POK to be in charge of loosely-coupled
architecture, my wife did a stint in the JES group ... including working
on the spec. for JESUS (JES Unified System) ... taking all the features,
that customers couldn't live w/o, from both JES2 and JES3 (slightly
earlier, she had been part of the catchers for ASP in the JES group).
However, the polarization of the two sides prevented much progress from
being made.

in POK, she did peer-coupled shared data architecture ... which saw
little uptake (except for IMS hot-standby) until sysplex (contributing to
her not staying long in that position; that & periodic battles with the
SNA organization over demands that SNA be mandated for peer-coupled
coordination communication).
http://www.garlic.com/~lynn/submain.html#shareddata

--
virtualization experience starting Jan1968, online at home since Mar1970

one of the funny things ... the major use of SSL in the world today
is for hiding account number and transaction information ... related
to this earlier work we had done for "electronic commerce".

However, the x9.59 financial standard work eliminates the threat (of
fraudulent financial transactions) as a result of this information
leaking (x9.59 slightly changes the paradigm as part of preserving
the integrity of the financial infrastructure for all retail
payments). x9.59 did nothing to prevent information leakage,
skimming, data breaches, etc ... it just eliminated the threat when
such activities occur. As a result, x9.59 also eliminates the need for
SSL (in its major use associated with "electronic commerce" and hiding
account number and transaction information).
http://www.garlic.com/~lynn/x959.html#x959

--
virtualization experience starting Jan1968, online at home since Mar1970

Cyber criminals seek 'full' sets of credentials that trade for only a few pounds

Basically current online banking and payment infrastructures are
subject to various kinds of replay attacks ... once the criminals have
obtained one or more pieces of static information.

We had been asked to come in to consult with a small client/server
startup that wanted to do payment transactions on their server. The
startup had also invented this technology called "SSL" they wanted to
use ... and the result is now frequently called "electronic commerce".

Somewhat as a result, in the mid-90s, we were asked to participate in
the x9a10 financial standard working group, which had been given the
requirement to preserve the integrity of the financial infrastructure
for ALL retail payments. After detailed threat and vulnerability
studies of a wide variety of transaction types & environments, x9a10
came up with the x9.59 financial transaction standard.
http://www.garlic.com/~lynn/x959.html#x959

One of the things that x9.59 did was slightly tweak the paradigm and
eliminate the vulnerability from crooks obtaining static information
(frequently from previous transactions) that allowed them to perform
fraudulent (payment &/or online banking) financial transactions.
x9.59 did nothing to eliminate skimming, data breaches,
harvesting, eavesdropping and/or other mechanisms used to gather static
information (for replay in fraudulent financial transactions) ... it
just eliminated the usefulness of such gathered static information to
the crooks (for performing fraudulent financial transactions).

--
virtualization experience starting Jan1968, online at home since Mar1970

from above:
The answer is, yes and no. Security professionals recognize that total
security can never be achieved. Instead, one can only hope to contain
the problem by implementing processes that seek to minimize the scope
of the software problems and the attackable surface on which
cyber-criminals feed

... snip ...

as previously posted ... the x9a10 financial standard working group
approach to the vulnerability of information in financial transactions
... was to break the cycle of ever more sophisticated attacks
and countermeasures ... and eliminate the information vulnerability by
slightly tweaking the paradigm. It does nothing directly to prevent
such attacks ... but eliminates the financial motivation for crooks,
since they are no longer able to leverage the information obtained for
performing fraudulent financial transactions.

--
virtualization experience starting Jan1968, online at home since Mar1970

JES2 vs. JES3

part of an ibm-main thread from 2000 noting that ASP was done by the same
IBM group that did Direct Couple at the LA Science Center (i.e. ASP
traces back to the 7040/7090 direct couple system):
http://www.garlic.com/~lynn/2000.html#77

there is some analogy with autos and the transition from needing a
professional operator to being available for general consumer use. it
took decades of vehicle and road safety engineering to get where we are
now (recent news that auto accident deaths had dropped to the lowest
level since 1950).

Part of the PC heritage was that it was designed for dedicated,
offline use ... with no countermeasures for operation in a hostile
environment (except user beware, sort of the auto story for nearly a
century). It evolved some with network support for small, closed, safe
networking environments (automatic execution of lots of things
arriving from the network w/o any safeguards or countermeasures for
malicious activity). It was easy to extend the network support (done
for closed, safe environments) to the wild anarchy of the world-wide
internet ... but with enormous consequences.

on the internal network (a closed, corporate network, larger than the
arpanet/internet from just about the beginning until late '85 or early
'86) there was an automatic-execution exploit in the 70s ... except it
was not very malicious. That resulted in a lot of study of prohibiting
automatic execution. Later in the 80s, some of the technology from the
internal network was used in the education network bitnet (in the us)
and earn (in europe). That had a worm almost exactly 12 months before
the morris worm on the internet.

Internally, after the 23jun69 unbundling announcement ... starting to
charge for application software, se services, etc ... the internal
cp67 HONE service was launched ... originally as a mechanism for
providing branch SEs with hands-on experience (a lot of SE
experience had been hands-on at customer sites, which was severely
reduced after the unbundling announcement).
http://www.garlic.com/~lynn/subtopic.html#hone

The science center had also ported apl\360 to cp67/cms for
cms\apl. A large number of sales & marketing support
applications were developed in cms\apl ... and eventually those
began to dominate all activity on HONE (with SE hands-on operating
system experience in virtual machines disappearing). Fairly quickly,
mainframe orders weren't even accepted unless they had first been
processed by a number of HONE applications. After the consolidation of
the US HONE datacenters (in the mid-70s, in a bldg. next to where
facebook is now located), US HONE had over 30,000 userids by the late
70s. HONE clones were also sprouting up all over the world.

One of the things that started happening in the late 70s ... was that
a lot of datacenters were starting to burst at the seams ... and you
saw customers putting the new generation of mid-range machines (vax'es
from dec and 43xx from ibm) in all sorts of places that didn't require
expensive datacenter operation (resulting in a big explosion in these
machines). 43xx and vax'es sold in fairly similar numbers for orders
with a small number of machines. The big boost for aggregate 43xx
numbers was large customers ordering hundreds of 43xx machines at a time
.... and putting them out into every nook & cranny.

One of the side-effects of this ... was that the disk division's only
"new" mid-range disk was the FBA 3370 ... and MVS had failed to come out
with any FBA device support. This pretty much left them out of this
new exploding market ... leaving it to vm370, vs/e, etc. Eventually
they did CKD simulation on the FBA 3370 and called it the "3375" ... to
give MVS some entry into the midrange market (MVS still doesn't have FBA
support ... and all current CKD offerings are done with simulation on
top of underlying FBA hardware).
http://www.garlic.com/~lynn/submain.html#dasd

--
virtualization experience starting Jan1968, online at home since Mar1970

ping

Peter Flass <Peter_Flass@Yahoo.com> writes:
I was wondering too, but this is always a slow time. Labor Day in the
US probably keeps many people off-line for a week or so, and colleges
are just getting going for the year. Does anyone have a count of
postings by week for the last few years?

They let me play disk engineer in bldg 14 (disk engineering) and bldg
15 (disk product test). They were doing stand-alone disk testing with a
simple RYO monitor ... with testing scheduled round-the-clock and on
weekends. They had attempted to try MVS (to be able to do multiple
concurrent testing), but MVS had a 15min MTBF in that environment. I
offered to do an IOS rewrite to make it absolutely bullet-proof and
never fail ... allowing on-demand, multiple concurrent testing anytime
(vastly improving productivity). However, as a result I also got
pulled into diagnosing, resolving, and working on all sorts of disk
issues.

On the other side, I got to have much of the rest of the processor
time .... since disk testing tended to be very I/O intensive and
hardly used more than a couple percent of the CPU. They tended to get
the 1st available processors (after the processor engineers) for disk
testing. As a result I had better access to early 3033, 4341, etc
... than most of the labs. I had done a lot of work on the original ECPS
for the 138/148, working with endicott ... so I got asked to do various
early testing with the disk engineering 4341 (for various endicott
groups; including performance benchmarks) ... because I had better
access than they did.

One of the evolving 4341 issues was that a cluster of 4341s was less
expensive, easier to deploy, and had a higher aggregate mip rate, higher
aggregate disk thruput, and larger aggregate thruput overall ... than a
3033 ... which led to some interesting internal political battles during
the period.

I believe the referenced machine was the one that tymshare obtained a
copy of adventure from, for porting to vm/cms. I managed to obtain a
copy (very early on) for distribution over the internal network (I
would provide the original source to internal users who demonstrated
that they had obtained all points). There was a period when development
appeared to nearly stop at many internal labs because so many were
playing adventure. I remember STL announcing a 24hr grace period
... after which, any person caught playing adventure during normal
working hrs would be dealt with severely.

more mainframe crashing .... across the bay in berkeley ... there was a
story about a cdc6600 crashing every tuesday morning (10am?) ... with a
thermal fault. they eventually tracked it to a drop in water pressure;
the grass was watered at that time every week ... simultaneously with a
lot of people getting out of class and a large number of flushing
toilets ... which resulted in loss of water pressure to the cooling
units in the datacenter.

--
virtualization experience starting Jan1968, online at home since Mar1970

there was also the speed-matching buffer allowing 3880s to be attached
to 168s & 3033s (staging 3mbyte/sec disk data transfers down to
1.5mbyte/sec channel speed) ... code name "Calypso" ... which had a
whole load of problems. The 168 wasn't too bad ... but the 3033 was
really slow since the 303x channel director was really a 158 in disguise
... and had just about the slowest channels of any 370 (high overhead
for command processing, etc). One of the other things about the 4341
(from the earlier mention of 4341 vis-a-vis 3033) was that the 4341
channel interface could just about operate at 3mbytes/sec w/o any
problems.

There were all sorts of short & long term effects from MVS not
supporting FBA. They had told me that even if I provided them fully
integrated & tested support ... it would still cost $26M to ship MVS
support for FBA (documents, education, training, etc). I needed to
show a business case with incremental revenue of something like ten
times the $26M ... and wasn't allowed to use long term life cycle cost
as part of the justification (and since customers were already buying
DASD as fast as it could be built ... the story was that if FBA
support shipped, customers would just buy the same amount of FBA instead
of CKD .... showing no incremental revenue).

With FBA, speed matching becomes significantly simpler ... handling
long-haul propagation latency also becomes simpler (i.e. ESCON and FICON
with distances exceeding the walls of the datacenter) ... and of course
... it totally eliminates all the extra stuff that has to be done now
that all the disk hardware is "REALLY" FBA (the 3375/3370 was just a
sign of things to come).

in the early 70s, the "Future System" project started ... largely
motivated by clone controllers; it was going to be as different from
360/370 as 360 had been different from earlier generations. During the
FS period ... the 370 product pipeline was allowed to go dry (since FS
was going to completely replace it).
http://www.garlic.com/~lynn/submain.html#futuresys

When FS was finally killed off ... there was a mad rush to get products
back into the 370 (hardware & software) product pipeline ... and in
parallel, start work on the 370 follow-on generation. The stop-gap at
the high-end was to take the 158 engine, drop the 370 microcode
... just leaving the integrated channel microcode, repackaging it as the
303x channel director. The 3031 was then a 158 engine with just the 370
microcode (and w/o the integrated channel m'code) and a 2nd 158 engine
as the 303x channel director. The 3032 was a 168-3 repackaged to work
with the 303x channel director. The 3033 started out as the 168 wiring
diagram mapped to 20% faster chips. The chips also had something like
ten times the circuits/chip ... initially going unused. Before ship,
some of the 168 logic was redone to take some advantage of the
additional circuits/chip ... resulting in the 3033 shipping about 50%
faster than the 168-3.

In parallel, there was "e-architecture" for the low and mid-range
(the code name for the 4341 was "E4") ... and "XA" tailored to high-end
MVS (the code name for "XA" was "811"). POK managed to convince
corporate that they wouldn't be able to make the mvs/xa ship schedule
unless vm370 was killed, the product group shut down, and all the people
moved to POK to be part of mvs/xa development (Endicott eventually
managed to save the vm370 product mission but had to reconstitute a
development group from scratch). They were going to delay telling the
vm370 group of the shutdown/transfer until the last possible minute
... minimizing the number of people that might find jobs in the local
area. However, the information leaked to the group a few months early
(resulting in a witch hunt to identify who had leaked the information).
Some number of people were able to find alternatives in the local area
working on VMS for DEC (resulting in a joke that the head of POK was a
major contributor to DEC VMS).

Part of mvs/xa development was the internal-only virtual machine
"vmtool". Much later, a flavor of vmtool was shipped to customers as a
migration aid (from mvs) to mvs/xa ... and later still, it would morph
into the vm/xa product. I've got misc. old email about the vm/xa product
group being pressured by the mvs group to not support FBA devices
... being asked to make statements that FBA didn't make any sense
since CKD was so much better than FBA (basically as part of supporting
MVS's lack of FBA support). Of course it is now clear that that is all
ridiculous, with all disk technology now being FBA ... and MVS
requiring a layer of CKD emulation on top.
http://www.garlic.com/~lynn/submain.html#dasd

One of the side-effects of including mention of the MVS 15min MTBF in
an internal-only report about testing environments in the engineering
and product test labs ... was that the MVS organization took offense and
proceeded to block/non-concur with any corporate awards for the
effort (actually with any corporate award).

Getting closer to the release of the 3380, MVS was still having
significant problems with their disk support. There is an old email
reference to engineering having a standard regression bucket of 57
typically expected errors (that they would create & test for) ... and
MVS would fail and require re-ipl/reboot in all cases (in most cases,
not even giving an indication of what caused the failure).

Morten Reistad <first@last.name> writes:
I don't know about performance, but I know from pioneers in the
travel business that reservation systems were all unstable and pretty
fragile until somewhere around 1982; when the industry responded.

After that they could enter search lists spanning 3-4 lines in some
obscure syntax.

... for other drift ...

in the mid-80s, my wife did a short stint as chief architect for AMADEUS
... which started out being based on the old EASTERN system/one (that
had run on a 370/195). she didn't last long because she backed x.25 (as
opposed to sna) and the sna forces got her removed (which didn't help
them much since AMADEUS went x.25 anyway; there is a big thick AMADEUS
design document that still survives in a box somewhere in storage).

Around the early 80s, ACP was renamed TPF ... to reflect some uptake by
other industries (not just airlines) and by financial networks (no
longer restricted to reservation systems).

This was in the 308x period ... while ACP/TPF had cluster support ... it
lacked multiprocessor support ... which caused the corporation some
amount of problems ... since the 308x originally wasn't going to be
available in anything other than a multiprocessor configuration. This
contributed to a bunch of unnatural things being done to vm370 ... in
order to try and give virtual ACP/TPF improved thruput on the 3081
... pretty much at the expense of every other customer vm370 workload
on 308x.

Eventually they got around to removing a processor from the
(two-processor) 3081, resulting in the 3083. There were all sorts of
issues ... one of them was that, the way things were laid out in the
3081, the straightforward solution was to just remove processor-1
(leaving only processor-0) ... except all the "wiring" for processor-0
had it at the top of the box ... which would have left the box seriously
top-heavy.

In the mid-90s, we got brought into one of the major airline res.
systems ... to look at the 10 impossible things that they (wanted to
do &) couldn't do. Part of the issue was primitive data
management facilities (which contributed to some amount of its
thruput). As a result, major data management was done on a traditional
MVS system ... and then once or twice a month ... there was a major
"rebuild" of the production res. system. This required taking the
system down & trying to limit the outage to a single shift. It was
also getting harder to find a good shift to be down ... as the system
was being used world-wide. Sunday night had been traditional ... but
that was already 1st shift monday in the far east. One of the off-spring
had a (college) part-time job answering phones at an air freight
company ... which required access to the res system for freight
shipments ... and it was not unusual for the rebuild outage to extend
into monday morning.

The arcane system and language capabilities meant a rather restricted
labor pool ... as a result this operation had a couple thousand
employees with inflated salaries (apparently a quarter million was
pretty run of the mill). The solution I did ... solved all ten
impossible things ... but also did it on a different platform ... and
eliminated the manual tasks being performed by several hundred of
these individuals. We were then told that they hadn't actually wanted
us to solve the problem, just to be able to tell the board for the
following five years that we were working as consultants (one of the
board members, in a prior life, had been an executive at STL before
leaving to take a CIO job in the financial industry and then the airline
industry). The rewrite also collapsed something like three separate
(complex, obscure syntax) interactions into a single much simpler
operation (besides the complex obscure syntax ... there was a sequence
of operations that had to be done by an experienced, trained person).

the science center, besides doing virtual machines, GML, lots of
online tools ... the port of apl\360 to cms for cms\apl ... and a bunch
of other stuff ... did a lot of work on performance measurement,
performance reporting, workload profiling, system profiling and
performance modeling (both event simulation as well as analytical
modeling done in apl) ... some of it eventually morphing into capacity
planning.
http://www.garlic.com/~lynn/subtopic.html#545tech

in the wake of the 23jun69 unbundling announcement, the corporation
created the (virtual machine based cp67) HONE system ... to give SEs
"hands-on" experience with operating systems (prior to starting to
charge for SE services, lots of SE experience came from being part of
large groups at customer sites). However, they also started to deploy a
lot of (cms\apl based) sales & marketing support applications on HONE
... which came to dominate all activity, and the virtual guest
operating system testing died off.
http://www.garlic.com/~lynn/subtopic.html#hone

One of the science center workload & configuration analytical modeling
tools was packaged as a HONE application (Performance Predictor),
where sales & marketing types could enter customer workload and
configuration details and then ask "what-if" questions regarding what
happens when there are changes to workload and/or configuration.

--
virtualization experience starting Jan1968, online at home since Mar1970

Mainframe Slang terms

Internally there was a similar tool ... which would automatically do
load balancing and thruput profiling. The issue is that for a 3380 to
give the same thruput as a 3350 ... the 3380 had to be held to about 80%
allocation (aka the amount of data on a 3380 increased by a much larger
factor than the increase in speed). The thruput profiling and load
balancing ... were careful to keep the 3380 allocation limited so as not
to have worse thruput than a 3350.

There was some advice that the allocation-limited 3380s could have
very low-use data loaded on the remaining 20%.

At SHARE there was a semi-facetious proposal to offer a "high-speed"
3380 ... basically a normal 3380 where a special microcode load
... only allowed allocation on half the cylinders .... and to charge
more for these "high-speed" 3380s (i.e. it was for admin types that
didn't understand the false economy where the system degradation from a
fully loaded 3380 cost significantly more than the relatively trivial
$$/byte savings from fully loading the 3380).

I had started writing in the late 70s about disks showing a
significant decline in relative system thruput. In the early 80s, some
disk division executives took offense and assigned the division
performance group to refute my statements. After a few weeks, they
came back and basically said that I had slightly understated the case
(at the time, the statement was that disks had seen a factor of 10 times
decline in relative system thruput). The analysis was turned into a
SHARE presentation on organizing disk allocation for system thruput
.... session B874 at SHARE 63, 8/18/84 ... bits and pieces from the
presentation are spread thru a couple of posts:
http://www.garlic.com/~lynn/2001l.html#46 MVS History (all parts)
http://www.garlic.com/~lynn/2006f.html#3 using 3390 mod-9s
http://www.garlic.com/~lynn/2006o.html#68 DASD Response Time (on antique 3390?)

this old reference has the table from '83 that motivated the disk
executives to ask the performance group to refute the claims (the table
works with a fixed font):
http://www.garlic.com/~lynn/93.html#31 Big I/O or Kicking the Mainframe out the Door

--
virtualization experience starting Jan1968, online at home since Mar1970

A mighty fortress is our PKI, Part III

On 09/16/10 00:10, James A. Donald wrote:
That is rather like having a fortress with one wall rather than four
walls, and when attackers go around the back, you quite correctly
point out that the wall is only designed to stop attackers from
coming in front.

the one i was fond of using during most of the 90s ... was a massive
bank vault door in the middle of an open field ... with no walls,
floor, ceiling, etc; operations would take prospects on a tour of the
field to show off how substantial the vault door was (impressing them
with spending all their money on the door, and making them forget they
also needed a vault).

when i was an undergraduate in the 60s, the univ. library had an ONR
grant to do an online catalog ... some of the money went to purchase a
2321/datacell. the project was also selected to be a betatest site for
the CICS product ... and I got tasked to support/debug the
implementation (I remember shooting a CICS bdam open bug ... involving
the library selecting different bdam options than the original
implementation).

the two early spin-offs from cp67 (& the science center) were NCSS and
IDC ... doing online commercial time-sharing services. Later, another
virtual machine based (with vm370) commercial time-sharing service was
Tymshare.

Most of the (virtual machine based) online commercial time-sharing
services made a large number of enhancements to cp67 & vm370. NCSS
ported cp67 to 370, renaming it vp/css.
https://en.wikipedia.org/wiki/National_CSS

the above also mentions one of the first "hacking" events ... covered in
a NYT article on 26july81.

from above:
The number of reported data breaches have skyrocketed since numerous
states, including Pennsylvania and New Jersey, adopted new data breach
notification laws in 2005 and 2006.

... snip ...

We were tangentially involved in the (original Cal) state data breach
notification legislation. We had been brought in to help wordsmith the
electronic signature legislation and several of the parties were
heavily involved in privacy issues.
http://www.garlic.com/~lynn/subpubkey.html#signature

They had done detailed (citizen) privacy surveys, and the no. one issue
was "identity theft" ... a significant part being the form of "account
fraud" that results from data breaches. One of the issues with regard to
data breaches ... is that crooks use the information to perform
fraudulent financial transactions against the account holders ... i.e.
there is no direct threat against the institutions where the breaches
occur ... and therefore they had much less motivation to take
countermeasures.
http://www.garlic.com/~lynn/subintegrity.html#harvest

--
virtualization experience starting Jan1968, online at home since Mar1970

we had been brought in as consultants to a small client/server startup
that wanted to do payment transactions on their server ... the startup
had also invented this technology they called "SSL" that they wanted to
use; the result is now frequently called "electronic commerce".

somewhat as a result, in the mid-90s, we were asked to participate in
the x9a10 financial standard working group which had been given the
requirement to preserve the integrity of the financial infrastructure
for all retail payments (aka debit, credit, stored-value, gift-card,
face-to-face, point-of-sale, internet, high-value, low-value, contact,
contactless, wireless, transit turnstile, aka ALL), which resulted in
the x9.59 financial transaction standard. Part of the standard was
slightly tweaking the existing paradigm to eliminate skimming,
eavesdropping, data breach and other similar threats involving
harvesting "static" data for performing fraudulent transactions (it did
nothing to prevent such activities, just eliminated the threat that such
activities could use the information for performing fraudulent
transactions). Part of x9.59 is being format agnostic and allowing
authentication proportional to risk (i.e. possibly single-factor for
low-value ... with the exact same components also doing various levels
of multi-factor authentication for higher values).
http://www.garlic.com/~lynn/x959.html#x959

Such components could be deployed in such a way that the
aggregate, overall infrastructure expense is less than the current
paradigm ... and with sufficient integrity that the cost to compromise
would always be higher than any expected resulting fraud (the standard
is also format agnostic).

Part of the issue is that in the current infrastructure, a major portion
of existing fraud is borne by merchants (in the form of various fees and
other charges). Raising the bar on existing retail payment fraud would
likely drive the crooks to switch to attacks involving opening new
accounts (rather than attacks on existing accounts) ... including
using "synthetic ids" (with no actual corresponding person) ... which
would shift all of the burden to the financial institutions.

--
virtualization experience starting Jan1968, online at home since Mar1970

A question about HTTPS

The browser parses the URL and asks the name server for the ip-address
that corresponds to the host name. The browser then does a TCP
connection ... defaulting to port 80 (for HTTP) or port 443 (for HTTPS)
on the target host.
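
A minimal sketch of that step (illustrative only; getaddrinfo/connect
are just stand-ins for whatever resolver and TCP code a given browser
actually uses):

#include <string.h>
#include <unistd.h>
#include <netdb.h>
#include <sys/socket.h>

/* resolve the host name and make the TCP connection, defaulting to
   port 80 for HTTP or port 443 for HTTPS */
int browser_connect(const char *host, int use_https)
{
    const char *port = use_https ? "443" : "80";
    struct addrinfo hints, *res;
    int fd;

    memset(&hints, 0, sizeof(hints));
    hints.ai_family = AF_UNSPEC;       /* IPv4 or IPv6 */
    hints.ai_socktype = SOCK_STREAM;   /* TCP */

    if (getaddrinfo(host, port, &hints, &res) != 0)
        return -1;                     /* name server lookup failed */

    fd = socket(res->ai_family, res->ai_socktype, res->ai_protocol);
    if (fd >= 0 && connect(fd, res->ai_addr, res->ai_addrlen) != 0) {
        close(fd);
        fd = -1;
    }
    freeaddrinfo(res);
    return fd;   /* connected socket (HTTP or SSL talk happens next), or -1 */
}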

In the early days, the use of TCP for HTTP/HTTPS resulted in enormous
strain on webservers ... since there had been an assumption that TCP
sessions would be long-running, and the session close implementation
wasn't particularly optimized. For a period, heavily loaded webservers
were spending 90% of their cpu in TCP session close processing (until
optimized implementations started appearing).

We had been brought in to consult with a small client/server startup
that wanted to do payment transactions on their server; the startup
had also invented technology they called "SSL" that they wanted to use;
the result is now frequently called "electronic commerce".

Since we had approval authority over this part of the implementation,
one of the mandates was mutual SSL authentication (an implementation
hadn't existed up until then) and "multiple-A" record support, i.e. DNS
returning a list of ip-addresses instead of just one (in case of
problems with connecting, attempts would be made to establish a TCP
connection with each address in the list until successful). However,
we didn't have sign-off authority on the standard browser
implementation ... so it took nearly another year before there was
multiple-A record support on the browser side.
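
A hedged sketch of the "multiple-A record" behavior (names and structure
are illustrative, not the original browser/server code): the resolver
hands back a list of addresses and the client tries a TCP connection to
each one in turn until one succeeds.

#include <string.h>
#include <unistd.h>
#include <netdb.h>
#include <sys/socket.h>

int connect_any(const char *host, const char *port)
{
    struct addrinfo hints, *list, *ai;
    int fd = -1;

    memset(&hints, 0, sizeof(hints));
    hints.ai_family = AF_UNSPEC;
    hints.ai_socktype = SOCK_STREAM;

    if (getaddrinfo(host, port, &hints, &list) != 0)
        return -1;                      /* lookup failed */

    /* walk the returned address list: first successful connect wins */
    for (ai = list; ai != NULL; ai = ai->ai_next) {
        fd = socket(ai->ai_family, ai->ai_socktype, ai->ai_protocol);
        if (fd < 0)
            continue;
        if (connect(fd, ai->ai_addr, ai->ai_addrlen) == 0)
            break;                      /* connected */
        close(fd);
        fd = -1;                        /* problem ... try the next address */
    }
    freeaddrinfo(list);
    return fd;
}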

--
virtualization experience starting Jan1968, online at home since Mar1970

Will new card innovation help interchange and improve retention?

Interchange networks can be considered examples of
value-added-networks that grew up during the 70s & 80s ... and which
were mostly obsoleted by the spreading, ubiquitous internet in the
90s. Interchange networks are one of the few remaining examples and
possibly could go the way of the other value-added-networks with
improvements in (internet) transaction integrity and availability.

We had been called in to consult with a small client/server startup that
wanted to do payment transactions on their server; they had also
invented this technology called "SSL" they wanted to use; the result
is now sometimes called "electronic commerce".

Somewhat as a result, in the mid-90s, we were asked to participate in
the x9a10 financial standard working group which had been given the
requirement to preserve the integrity of the financial infrastructure
for ALL retail payments (i.e. debit, credit, stored-value, ACH,
point-of-sale, unattended, internet, wireless, contact, contactless,
high-value, low-value, transit-turnstile, aka ALL). After some
amount of detailed end-to-end threat and vulnerability studies of all
the environments, the x9.59 financial transaction standard resulted.
http://www.garlic.com/~lynn/x959.html#x959

Part of the X9.59 standard was making it format agnostic (card,
cellphone, dongle, token, etc ... could work in any and all
environments). Another part of x9.59 was to slightly tweak the paradigm
and eliminate the eavesdropping, skimming, data breach, etc
vulnerabilities (it didn't do anything to prevent such activity ... it
just eliminated the ability of crooks to use the harvested information
for performing fraudulent financial transactions).

Attempting to meet all the other criteria ... x9.59 also enabled a
person-centric paradigm (allowing the standard to work with an
institution-provided authentication device ... like an
institution-issued card ... as well as with a consumer-provided
authentication device ... like a user-provided hardware token or
smartphone).

Note that the major use of SSL in the world today is to hide
transaction details as a countermeasure to crooks being able to use the
information for fraudulent financial transactions. X9.59 eliminated
that as a vulnerability ... and therefore also eliminates the need for
this earlier work we had done using SSL for "electronic commerce".

--
virtualization experience starting Jan1968, online at home since Mar1970

z/OS, TCP/IP, and OSA

PaulGBoulder@AIM.COM (Paul Gilmartin) writes:
IIRC, the first TCP/IP interface we had was an Intel Fastpath. I believe
it was genned as a CTC. I found a 1988 Network World article mentioning
Intel Fastpath Model 9750D and Interlink Model 3732.

the original ibm mainframe tcp/ip product was done for vm and could
consume nearly a full 3090 processor doing 44kbytes/sec transfer. I
added rfc1044 support, and in some testing at cray research managed to
get channel media thruput using a modest amount of 4341 processor (maybe
a factor of 500 times improvement in instructions executed per byte
moved). lots of past posts mention adding rfc 1044 support:
http://www.garlic.com/~lynn/subnetwork.html#1044

the base tcp/ip product was ported to mvs (and offered as a product) by
adding simulation for the required vm kernel functions.

later a contractor was hired to add tcp/ip support to vtam ... the
folklore is that it was explained to the contractor in no uncertain
terms ... that there would be no "valid" (aka acceptable) tcp/ip
implementation (in vtam) that outperformed lu6.2 (the folklore was that
the explanation had to be given after the contractor submitted an
initial implementation that didn't meet the "valid" criteria).

we had been doing the internal network with T1 links and working with
various parties on what was to become the NSFNET backbone (operational
precursor to the modern internet) ... we would claim that the NSFNET
backbone rfp called for T1 ... in part because we already had T1 links
up and running in production. Then some internal politics prevented us
from bidding on the backbone RFP (the nsf director wrote a letter to the
corporation, copying the ceo ... but that just aggravated the internal
politics). It turns out that the winning T1 backbone RFP response
... didn't actually install T1 links; they installed 440kbit/sec links;
and somewhat to meet the letter of the RFP, installed T1 trunks with
telco multiplexors (to handle multiple 440kbit links on the T1
trunks). misc. old "nsfnet" related email from the period:
http://www.garlic.com/~lynn/lhwemail.html#nsfnet

--
virtualization experience starting Jan1968, online at home since Mar1970

z/OS, TCP/IP, and OSA

dboyes@SINENOMINE.NET (David Boyes) writes:
8232 (a channel attached PC/AT that came with a Ungermann/Bass 10mbit
Ethernet card that jammed easily on networks with lots of collisions)
also genned as a CTC

one of the reasons it (and some of the others) was so slow (and used a
lot of the processor) ... was that it wasn't a *real* tcp/ip
device/router ... it was a lan device ... aka the mainframe tcp/ip code
had to do the lan/mac header packaging of the ip-packet ... before
sending it down the channel interface (aka the box just acted as a
physical interface between the channel and the lan ... w/o any tcp/ip
code).

some of the other boxes were real tcp/ip devices/routers ... aka
mainframe could send ip-packet over the channel interface ... and the
outboard box did the appropriate routing and added appropriate media
headers (as needed).

as an aside ... the original vm (and mvs with vm functional simulation)
product was done in vs/pascal ... and didn't have any of the buffer
overflow vulnerabilities that are frequently endemic in c-language
implementations.

--
virtualization experience starting Jan1968, online at home since Mar1970

PIN + token is multi-factor authentication .... where the PIN is a
countermeasure to a lost/stolen token. Nominally, multi-factor
authentication is considered more secure assuming that the different
factors have independent vulnerabilities.

In the shared-secret (PIN) model, a unique PIN/password is required
for every security domain (as a countermeasure to cross-domain
attacks). As a result, the number of PIN/passwords has exploded
(compared to 40 years ago) ... exceeding the capacity of most human
memories. One study of PIN-debit cards found 1/3rd had the PIN written
on them (because of the stress that the exploding number of
PIN/passwords had placed on human memories). That invalidates the
assumption about multiple, different factors having independent
threats/vulnerabilities.

In the skimming world, a compromised POS device can skim both the
static magnetic stripe information and the PIN at the same time
... also invalidating the assumption about multiple, different factors
having independent threats/vulnerabilities.

In the consumer-centric scenario ... the same token (or a very small
number of tokens) is used in a large number of different environments.
The result is a drastically reduced number of PINs that have
to be remembered ... mitigating the threat of people being forced to
write the large number of different PINs on their large number of
different devices/tokens. Also, with the PIN being validated by the
token (as opposed to over the network) .... it is no longer a
shared-secret ... just a secret (not exposed to the different
security domains and with no requirement for transmission). Since there
isn't a shared-secret PIN (the PIN is purely a secret between the token
owner and the token) ... there is no institutional PIN management
requirement.
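
A minimal illustrative sketch of that point (not anything from the x9.59
standard): the PIN check happens inside the token, only the pass/fail
result ever matters outside it, and no PIN is transmitted to or shared
with a relying security domain.

#include <stdio.h>
#include <string.h>

struct token {
    char pin[8];          /* secret known only to the owner and the token */
    int  retries_left;    /* lockout counter enforced by the token        */
    int  unlocked;        /* set when a correct PIN has been presented    */
};

/* validation happens on the token; no PIN ever goes over the network */
int token_present_pin(struct token *t, const char *entered)
{
    if (t->retries_left <= 0)
        return 0;                       /* token locked */
    if (strcmp(t->pin, entered) == 0) {
        t->retries_left = 3;
        t->unlocked = 1;                /* enables signing a transaction */
        return 1;
    }
    t->retries_left--;
    t->unlocked = 0;
    return 0;
}

int main(void)
{
    struct token t = { "4821", 3, 0 };
    printf("wrong pin accepted? %d\n", token_present_pin(&t, "0000"));
    printf("right pin accepted? %d\n", token_present_pin(&t, "4821"));
    return 0;
}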

Also, in the consumer-centric scenario ... for the same device to
operate in ALL environments ... the same token has to operate w/o a
PIN (single-factor, something-you-have authentication) for low-value
transactions (say transit turnstile) ... and w/PIN (multi-factor
authentication) for higher-value transactions.

In the x9.59 security-proportional-to-risk scenario ... there are
higher risks with non-x9.59 magstripe as well as much lower risk x9.59
transactions (it becomes a risk management issue). Note however
... x9.59 works equally well with a smartphone as a wireless
transaction, an internet transaction, and/or a contactless card at POS
and transit turnstiles.

There will always be a transition stage ... but since x9.59 is now
15yrs old ... any transition that had started back then would have
been pretty much over by now. Also the 15yr old x9.59 scenario has had
none of the exploits and vulnerabilities that most of the other
chip-based implementations have seen during the period since the
mid-90s.

Various offsetting costs for such a deployment include the elimination
of the skimming & data breach security measures that have to be deployed
(for the current paradigm). Also, in the consumer-centric scenario
... there is enormous consumer convenience in being able to collapse
everything to a single token (potentially something they already carry
... like a smartphone).

note there is a separate scenario that applies the same x9.59 paradigm
used for individual authentication tokens ... but instead to
authenticating POS devices ... in fact, the x9.59 financial
transaction standard allows (in addition to every transaction
having the individual's authentication) for the environment
where the transaction takes place (aka a POS terminal) to also provide
authentication (which could be an optional requirement, especially for
higher-valued transactions). Such a feature in each device can also be
used for various kinds of protocols requiring secret session keys (aka
the same chip in each individual authentication token ... also in every
POS terminal).

--
virtualization experience starting Jan1968, online at home since Mar1970

Patrick Scheible <kkt@zipcon.net> writes:
Right-wing attack radio myth. The Justice Dept. never minded banks
not lending to bad risks. Bankers lent the junk mortgages because
the bonus system rewarded such lending.

a big part was that the unregulated loan originators ... which had
gotten little attention in the past ... because they weren't a big part
of the economy ... had very little access to money for lending.

that all changed when the unregulated loan originators found they could
"securitize" the loans as toxic CDOs, pay the rating agencies for
"triple-A" ratings (when both the loan originators and the rating
agencies knew they weren't worth triple-A ... from fall2008
congressional testimony) ... and sell off every loan they could make.

They no longer had to care about borrower qualifications and/or loan
quality ... since they were immediately unloading the loans at premium
prices (and now had a tens-of-trillions-dollar market for their toxic
CDOs) ... the only thing limiting them was how large and how fast they
could make loans ... supposedly something like $27T worth of toxic CDOs
were cycled thru this mill.
Evil Wall Street Exports Boomed With 'Fools' Born to Buy Debt
http://www.bloomberg.com/apps/news?pid=newsarchive&refer=home&sid=a0jln3.CSS6c

no-documentation, no-down-payment, interest-only, 1% adjustable
mortgages became quite popular with speculators ... and the unregulated
loan originators couldn't care less ... the speculators drove up both
the size of the mortgages (with the speculation bubble) and the
frequency of the mortgages (flipping before the mortgages adjusted)
... those being the only factors that the unregulated loan originators
cared about anymore.

with real-estate inflation running 15-30% in various parts of the
country (further fueled by the speculators) ... speculators might make
2000% ROI with a no-documentation, no-down-payment, interest-only, 1%
adjustable mortgage (at least until the bubble burst). there was
supposedly a joke in the manhattan area, during the height of the
bubble, about it being something like musical chairs ... and who would
be left holding the stuff when the music stopped (aka the unflipped
mortgages and the toxic CDOs).

The fees, commissions, and bonuses related to the toxic CDOs were so
enormously attractive to individuals ... that even knowing that the
toxic CDOs could take down their institution ... their personal
compensation more than overcame any worries they might have had about
the institutions, economy, and/or country.

Part of the repeal of Glass-Steagall by GLBA ... allowed the too big to
fail banking institutions to have an unregulated investment banking arm
deal in toxic CDOs (that had started out with unregulated loan
originators) ... carrying them off-balance ... so they didn't show up on
the books of the regulated bank (even if the risk could take down the
whole institution) ... aka the loan originators and the toxic CDO
transactions managed to mostly skirt all the regulation associated with
normal bank lending and mortgages (which had revolved around regulated
depository institutions using deposits as source of funds for lending;
aka unregulated loan originators weren't using deposits as source of
funds, and unregulated investment banking arms were buying toxic CDOs).

Supposedly at the end of 2008, just the top four too-big-to-fail financial
institutions were carrying $5.2T worth of toxic CDOs off-balance. That
was part of the problem with the TARP legislation as originally passed
for purchasing "troubled assets" ... the amount of appropriated funds
would have barely made a dent in just those four institutions' $5.2T ... to
say nothing of the total amount of "troubled assets".

z/OS, TCP/IP, and OSA

PaulGBoulder@AIM.COM (Paul Gilmartin) writes:
At some point the TCP/IP stack must pass the address of an input
buffer to the network interface. Can an oversize packet overflow
that buffer? Or does the channel program prevent that and provide
a Length Indication?

the issue is that in most mainframe programming paradigms and several
programming languages ... lengths are explicit constructs ... and while
it is possible to have programming errors that result in buffer length
problems ... they have been relatively rare compared to the C-language
environment.
C-language programming has had a convention of implicit lengths based on
the data in the buffer (aka a "null" used to signal end of string) ...
which has contributed to a programming style that ignores and/or forgets
about actual lengths ... resulting in a huge number of buffer-length
related problems in C-language applications ... aka it is very
hard/difficult to *NOT* have buffer length problems in the C
environment, compared to the difficulty of having buffer length problems
in many of these other environments (in C, you have to work really hard
to not have a problem; in these other environments ... it usually takes
a lot of work to have a problem).
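
A small illustration of the difference (the buffer and input are
hypothetical, nothing from the products being discussed):

#include <stdio.h>

int main(void)
{
    char buf[8];
    const char *input = "considerably longer than eight bytes";

    /* implicit length: strcpy() trusts the NUL terminator in the input,
       so it would write past the end of buf -- the classic overflow
       strcpy(buf, input);                                              */

    /* explicit length: the copy is bounded by the buffer size instead  */
    snprintf(buf, sizeof(buf), "%s", input);   /* truncates safely      */

    printf("%s\n", buf);
    return 0;
}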

a new kindle arrived last friday. on linkedin this morning, I tripped
across a recommendation for "The Leader's Guide to Radical Management:
Reinventing the Workplace for the 21st Century [Kindle Edition]"
http://www.amazon.com/exec/obidos/ASIN/B0043M4ZPW/

early in the "book" ... it has composite/example descriptions of a few
individuals, as examples of what is wrong with today's work
environment. one of them is a loan originator and how they were not
required to care about the applicant's qualifications and/or the quality
of the loan ... just required to make as many loans as fast as possible
(since they would be immediately securitized and sold off ... they
weren't supposed to care whether the applicant would ever pay off the
loan or not).

small piece from above ...
If Ben had paused to ask himself what he was doing writing loans that
had little hope of being repaid, he might have replied ...

... snip ...

--
virtualization experience starting Jan1968, online at home since Mar1970

Fed Divided on Move to Buy U.S. Debt
http://www.nytimes.com/2010/09/01/business/01fed.html

from above:
The Fed used up its main policy tool by lowering short-term interest
rates to nearly zero in late 2008. So it bought about $1.4 trillion in
mortgage-linked securities and debts owed by entities like Fannie Mae
and Freddie Mac, from January 2009 to March 2010, in an effort to ease
credit and push down long-term rates.

jmfbahciv <See.above@aol.com> writes:
The repeal of the GS law created the loop hole so that banks could
issue loans for those who could not afford the mortgage payments
becuase the bank would transfer the loan to something else and
get it off its books. This became the norm which resulted in
the mess, once everyong saw they could do this. Why do you
think barney Frank protected FannieMae and FreddyMac from
realistic accounting? the goal was to "give" the poor
house ownership. AFAICT, he and his ilk are still promoting
this.

Repeal of Glass-Steagall didn't change how regulated banks made loans
and mortgages. Repeal of Glass-Steagall allowed regulated banks to play
in the wall-street (investment banking) triple-A rated, toxic CDO
transaction frenzy. One of the NY numbers was that wall-street
financial bonuses spiked over 400 percent during the period (the frenzy
had already started in 2002):

from above:
Here's a staggering figure to contemplate: New York City securities
industry firms paid out a total of $137 billion in employee bonuses from
2002 to 2007, according to figures compiled by the New York State Office
of the Comptroller. Let's break that down: Wall Street honchos earned a
bonus of $9.8 billion in 2002, $15.8 billion in 2003, $18.6 billion in
2004, $25.7 billion in 2005, $33.9 billion in 2006, and $33.2 billion in
2007.

... snip ...

... and there has been a lot of activity since the bubble burst trying
to prevent bonuses from returning to pre-spike levels. There was also a
report that the aggregate size of the industry (as a percent of GDP)
tripled during the bubble ... with no obvious added benefit to the
economy or the country (and lots of on-going efforts to keep things from
returning to pre-bubble levels).

It wasn't so much Barney Frank protecting Fannie & Freddy ... but all of
congress ... there was a CBS report (at the start of the century and
before the mess started) that they would put on (lobbyist) retainer
everybody that had any relation to congress (former congressmen,
staffers, friends, relatives, etc) and had more lobbyists than
employees. In some sense, Fannie/Freddy is small potatoes and
obfuscation compared to where the majority of the activity went on.

securitizing loans into CDOs gave unregulated loan originators a
(limited) source of funds for making loans (an alternative to
traditional loans from regulated banks using deposits as the source of
funds for making loans). CDOs had been used during the S&L crisis to
obfuscate the underlying value ... but there wasn't a big market for
toxic CDOs ... and therefore the amount of money was limited ... and
therefore also the number of loans that could be made was limited.

it was in the past decade, when they discovered that it was possible to
pay the rating agencies for triple-A ratings on toxic CDOs (when both
the sellers and the rating agencies knew that they weren't worth
triple-A), that the whole thing exploded. unregulated loan originators
now had effectively an unlimited source of funds (from the triple-A
rated toxic CDO market). Furthermore, being able to get a triple-A
rating, regardless of the actual value ... eliminated any reason to care
about the borrower's qualifications or loan quality ... checking on such
things just slowed down the process; cranking out loans as large as
possible and as fast as possible was the only limit on how much
money they could earn (since every loan they did would get packaged and
given a triple-A rating).

There seemed to be significant fees, commissions, and bonuses for
everybody even touching triple-A rated toxic-CDO transactions.

FannieMae and FreddyMac were actually rather late to buying triple-A
rated toxic-CDOs ... along with retirement funds and everybody else
participating in the frenzy.

The thing that GLBA did with the repeal of Glass-Steagall was that it
allowed regulated depository institutions to have unregulated investment
banking arms. For all the individuals involved, it was a lot more
(personally) profitable to deal in triple-A rated toxic CDOs ... than
normal mortgages ... and the unregulated investment banking arms could
buy up enormous amounts of triple-A rated toxic CDOs (with all the
players getting fees, commissions and bonuses). As previously mentioned,
at the end of 2008, the four largest too-big-to-fail regulated
depository financial institutions were carrying (off-balance) $5.2T in
these toxic CDOs (much larger than FannieMae and FreddyMac had on the
books).

The point of Glass-Steagall was to provide safety and soundness to
deposits at regulated banks ... limiting the amount of risk at which a
regulated bank could place deposits. Repeal of Glass-Steagall allowed
regulated banks (via unregulated investment banking arms) to participate
in enormously risky activity (like dealing in triple-A rated, toxic
CDOs) ... placing the institution and deposits at enormous risk.

Access z/OS 3270 TSO from "smartphone"?

timothy.sipples@US.IBM.COM (Timothy Sipples) writes:
In terms of CPU burden, sure, there's CPU burden incurred *somewhere*, and
CPU burden is never free anywhere. My employer -- the one that I do not
speak for -- has neatly solved that financial problem at least in this
particular mainframe use case and within its sphere.

jmfbahciv <See.above@aol.com> writes:
The repeal of the GS law created the loop hole so that banks could
issue loans for those who could not afford the mortgage payments
because the bank would transfer the loan to something else and
get it off its books. This became the norm which resulted in
the mess, once everyone saw they could do this. Why do you
think Barney Frank protected FannieMae and FreddyMac from
realistic accounting? The goal was to "give" the poor
house ownership. AFAICT, he and his ilk are still promoting
this.

remember ... at the time of GLBA and nearly all of the mess ... which
party was in control of congress?

the congressional party in power were the ones that put together GLBA
and passed it ... with little or no support from the other party
(54-44). However, the folklore was that the president was going to veto
it ... and so you see various things being added to GLBA to bring on
board nearly all of congress ... making the final vote "veto proof" (90-8).

GLBA didn't change how regulated banks used deposits for making loans
and mortgages. However, repeal of Glass-Steagall allowed the
institutions (via unregulated investment banking arms) to play in the
triple-A rated toxic CDO frenzy ... piling up huge amounts off-balance
... and making the whole institution insolvent. Their significant
participation in the triple-A rated toxic CDO frenzy ... also provided a
large percentage of the funds that fueled the unregulated loan
originators.

Much bigger than the Fannie/Freddy issue was that congress during the
last decade wasn't actively requiring that the SEC actually do anything
(and/or was meddling in SEC attempts to take action). The major enabler
for the whole financial mess and the triple-A rated toxic CDO frenzy was
being able to pay the rating agencies for triple-A ratings ... something
that the SEC should have stepped into.

In the wake of ENRON, congress had passed Sarbanes-Oxley ... but it
seemed to have little actual effect ... even though it supposedly had the
SEC taking a much larger role ... including with the rating agencies.

As part of SOX, the only thing that SEC actually seemed to do about the
rating agencies was produce this report:

maus <greymausg@mail.com> writes:
There is the theory that Greenspan let the lunatics in charge,
after the Internet Bubble, to 'reenergize' the US economy.

Not that it wasn't real ... but the CDS were somewhat obfuscation and
misdirection from the underlying triple-A rated, toxic CDOs ... being
able to place a bet on nearly anything ... including entities packaging
triple-A rated, toxic CDOs, selling them off, and then betting that
they would fail.

It was somewhat Greenspan in conjunction with people behind GLBA. The
CDS have cost a couple hundred billion ... but the toll that the
triple-A rated toxic CDOs have had ... runs more to large trillions.
Repeal of Glass-Steagall allowed regulated depository financial
institutions to have unregulated investment banking arms play in the
triple-A rated toxic CDOs ... effectively making the institutions
insolvent (if everything was required to come back on the books).

from above:
He played a leading role in writing and pushing through Congress the
1999 repeal of the Depression-era Glass-Steagall Act, which separated
commercial banks from Wall Street. He also inserted a key provision
into the 2000 Commodity Futures Modernization Act that exempted
over-the-counter derivatives like credit-default swaps from regulation
by the Commodity Futures Trading Commission. Credit-default swaps took
down AIG, which has cost the U.S. $150 billion thus far.

... snip ...

Commodity Futures Modernization Act was also implicated in Enron ...

Gramm and the 'Enron Loophole'
http://www.nytimes.com/2008/11/17/business/17grammside.html

from above:
Enron was a major contributor to Mr. Gramm's political campaigns, and
Mr. Gramm's wife, Wendy, served on the Enron board, which she joined
after stepping down as chairwoman of the Commodity Futures Trading
Commission.

from above:
A few days after she got the ball rolling on the exemption, Wendy
Gramm resigned from the commission. Enron soon appointed her to its
board of directors, where she served on the audit committee, which
oversees the inner financial workings of the corporation. For this,
the company paid her between $915,000 and $1.85 million in stocks and
dividends, as much as $50,000 in annual salary, and $176,000 in
attendance fees, according to a report by Public Citizen ...

from above:
That same year Greenspan, Treasury Secretary Robert Rubin and SEC
Chairman Arthur Levitt opposed an attempt by Brooksley Born, head of
the Commodity Futures Trading Commission, to study regulating
over-the-counter derivatives. In 2000, Congress passed a law keeping
them unregulated.

... snip ...

aka Wendy Gramm (as chairwoman of the Commodity Futures Trading
Commission) got the Enron exemption rolling and then stepped down to
join Enron's board (where she was on the audit committee); her
husband's provision in the 2000 Commodity Futures Modernization Act
later kept such derivatives exempt from regulation.

Do we really need to care about DNS Security?

We had been called in to consult with a small client/server startup
that wanted to do payment transactions on their server; they had also
invented technology called "SSL" they wanted to use; the result is now
frequently called "electronic commerce". As part of that activity, we
had to do walk-thru/audits of some of these new operations calling
themselves "Certification Authorities", that were issuing "digital
certificates" in support of "SSL".
http://www.garlic.com/~lynn/subpubkey.html#sslcert

A big motivation for "SSL" was to overcome some integrity issues in the
DNS infrastructure. However, the walk-thru of the CAs, showed that
they were requiring SSL certificate applicants provide a lot of
information ... that was then checked against the domain name owner
information on-file at the DNS infrastructure. Improving the integrity
of the DNS infrastructure was in the interest of the CA industry (as
countermeasure to compromised domain name owner information resulting
in providing SSL certificate to the wrong entity). However, improving
the integrity of the DNS infrastructure also mitigates the whole
justification for SSL & SSL certificates.
http://www.garlic.com/~lynn/subpubkey.html#catch22

Disclaimer: the person responsible for DNS was at the science center in
the early 70s ... at the same time I was.

--
virtualization experience starting Jan1968, online at home since Mar1970

there have been some references that the investment bankers behind a
lot of the junk bonds in the S&L crisis (left no fingerprints)
... were then involved in the internet bubble ... running
venture-capital/IPO mills (invest a couple million, two years later have
an IPO at a couple (or a couple hundred) billion; it was actually beneficial
to then have the startup fail, since it left the market wide-open for
the next startup IPO), ... and then went on to play major roles in
the more recent toxic CDOs and CDS activity.

On Thu, Sep 23, 2010 at 2:14 AM, O'Brien, Dennis L
<dennis.l.o'brien@bankofamerica.com> wrote:
I heard from a couple of performance people at SHARE that we should have
20% to 25% of the total storage in an LPAR configured as expanded
storage. Naturally, that's a guideline and the proper amount varies by
workload. What should I look at to determine if we have enough expanded
storage? We use Velocity's ESALPS suite. The systems that I'm most
concerned about have a Linux guest workload. One of them is all WAS,
and the other is a mix of WAS, Oracle, and some other things.

I've heard that WAS isn't the best choice for System z, but that's not
the focus of my concern. We have the workload that we have, and I just
want to make it run as well as it can.

expanded store was originally done for 3090 because of physical
packaging problems ... it was not possible to locate all the memory
they needed for configuration within the latency of the standard
memory bus ... so they created the expanded store bus that was wider &
longer ... and used software control to move 4k pages back&forth
between regular storage and expanded store. a synchronous instruction
was provided for moving the data back&forth.

the expanded store bus was also used to attach HIPPI (100mbyte/sec)
channel/devices ... since the standard 3090 i/o interface couldn't
handle the data-rate. However, since the bus didn't support channel
programs ... there was a peculiar (pc-like) peek/poke convention used
(i.e. i/o control was done by moving 4k blocks to/from special
reserved addresses on the bus).

effectively, the expanded store paradigm is used to partition real storage
into different classes ... however, going back at least 40yrs
... there is a large body of data that shows that a single large store is
more efficient than partitioning the same amount of storage (assuming
that there aren't other storage management issues/shortcomings).

the simple scenario is 10000 storage pages and 10000 expanded storage
pages ... all occupied; when there is a requirement for a page that is in
expanded storage, it is swapped with a page in regular storage
(incurring some software overhead). The alternative is one large block
of 20000 pages ... all directly executable ... and doesn't require
swapping any pages between expanded store and regular store.
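
As a rough illustration of that trade-off (purely a sketch, nothing to do
with actual CP code; the trace, frame counts, and swap accounting are all
made up for the example), a single large pool versus a split pool over the
same cyclic reference pattern:

  from collections import OrderedDict

  def run(trace, regular, expanded):
      """Count page faults and regular<->expanded swaps for a reference trace.
      regular/expanded are frame counts; expanded == 0 models one large pool."""
      reg = OrderedDict()    # directly executable pages, kept in LRU order
      exp = OrderedDict()    # pages parked in expanded storage
      faults = swaps = 0
      for page in trace:
          if page in reg:
              reg.move_to_end(page)          # referenced: now most recent
              continue
          if page in exp:
              exp.pop(page)                  # software-managed swap
              swaps += 1
          else:
              faults += 1                    # real page fault (disk I/O)
          if len(reg) >= regular:            # push the LRU regular page down
              old, _ = reg.popitem(last=False)
              if expanded:
                  exp[old] = True
                  if len(exp) > expanded:
                      exp.popitem(last=False)    # oldest expanded page dropped
          reg[page] = True
      return faults, swaps

  trace = [i % 15000 for i in range(100000)]     # made-up cyclic reference pattern
  print(run(trace, 20000, 0))        # one 20000-frame pool: no swap overhead
  print(run(trace, 10000, 10000))    # 10000+10000 split: same faults, many swaps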

One of the deficiencies is dealing with applications and/or operating
systems that perform their own caching/paging algorithm using some
sort of LRU mechanism (i.e. replacing their own pages/records using
some approximation to least-recently-used). This is characteristic of
large DBMS infrastructures that manage records in their own cache as
well as operating systems that support virtual memory. There is a
pathological scenario if the virtual operation doesn't have all its
own dedicated storage (like in LPARs); VM will be managing virtual
pages using an LRU methodology (least-recently-used pages are the ones
selected for replacement) ... at the same time the virtual guest/DBMS
is also managing (what it thinks is real storage) with an LRU
methodology. If both are operating simultaneously ... it is possible
for VM to "replace" what it thinks is the least-recently-used page
(the virtual page least likely to be used) ... at the same time the
virtual guest/DBMS has decided that same page is exactly the next page
it wants to use.

Executing LRU replacement algorithms in a virtual guest/DBMS ... where
its storage is also being managed via an LRU replacement algorithm,
... can invalidate the assumption underlying LRU replacement
algorithms ... that the least-recently-used page is the least likely
to be used (a virtual guest/DBMS ... doing its own LRU algorithm is
likely to select the least-recently-used page as the next page most
likely to be used).
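
A toy illustration of the pathology (again just a sketch; the buffer names
and structures are made up): when the hypervisor and the guest both order
the same pages by recency, the page the hypervisor steals is exactly the
one the guest's own replacement picks to reuse next:

  from collections import OrderedDict

  guest_lru = OrderedDict((f"buf{i}", True) for i in range(8))   # guest's own LRU list
  resident = set(guest_lru)                                      # hypervisor's view

  def hypervisor_steal():
      """Hypervisor pages out the guest page it considers least recently used."""
      victim = next(iter(guest_lru))        # same recency view as the guest
      resident.discard(victim)
      return victim

  def guest_replace():
      """Guest picks its own LRU buffer to reuse for the next record."""
      victim, _ = guest_lru.popitem(last=False)
      guest_lru[victim] = True              # about to be touched: most recent again
      return victim

  stolen = hypervisor_steal()
  reused = guest_replace()
  print(stolen, reused, stolen == reused)   # same page: the guest faults immediately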

jmfbahciv <See.above@aol.com> writes:
Doesn't it look like the FFs are the pot where all the debt is
ending up in?

We still don't know how big a mess this medical insurance edict
is going to create. I don't see how Congress can channel that
debt into the FFs.

fannie/freddie started playing in toxic CDOs ... but less than the
$5.2T that was in just the four largest too-big-to-fail financial
institutions (being carried off-balance). Lots of the stuff in
the news (like fannie/freddie) seems to be some obfuscation and
misdirection away from where the core of the problem was ... aka there
are lots of problems at fannie/freddie which need fixing ... but it is
something like a sacrificial lamb being thrown to the wolves ... while
those responsible for the majority of the problems get away.

Supposedly the original purpose for the TARP $700B was to buy up the stuff
... but it looks like when they found out how large the problem
really was ... they started using TARP money in other ways. Recent reports
are that the Federal Reserve has bought up $1.4T in toxic CDOs ... something
that doesn't require congressional approval/appropriation.

from above:
Between 2004 and 2006, when subprime lending was exploding, Fannie and
Freddie went from holding a high of 48 percent of the subprime loans
that were sold into the secondary market to holding about 24 percent,
according to data from Inside Mortgage Finance, a specialty
publication. One reason is that Fannie and Freddie were subject to
tougher standards than many of the unregulated players in the private
sector who weakened lending standards, most of whom have gone bankrupt
or are now in deep trouble.

During those same explosive three years, private investment banks -- not
Fannie and Freddie -- dominated the mortgage loans that were packaged
and sold into the secondary mortgage market. In 2005 and 2006, the
private sector securitized almost two thirds of all U.S. mortgages,
supplanting Fannie and Freddie, according to a number of specialty
publications that track this data.

... snip ...

let's say the majority of the problem was a symbiotic relationship between
two groups of predators ... unregulated loan originators and those that
dealt in (triple-A rated) toxic CDOs ... with some amount of plain greed from
real estate speculation thrown in.

That goes along with the industry that dealt in the toxic CDOs having
bonuses spike by more than 400% during the bubble/mess, and the size of
the industry (as a percent of GDP) tripling (with no obvious benefit to
the economy or the country; in fact, just the opposite) ... and there is
now all sorts of resistance to having things return to pre-bubble levels.

Central vs. expanded storage

Kris Buelens wrote:
There is handshaking between Linux and VM, and even more than one flavor....

The fact that z/VM still likes to have some expanded storage is that the
management of central and expanded are different:
For expanded, CP has a time stamp and know exactly how old each page is.
For central storage there only is the reference bit, thus CP can only know
if the page was referenced since the last scan.

least-recently-used approximation replacement (whether used in the
last scan or not) ... is based on the assumption that recency of use is a
predictor of the probability that the page will be needed in the near future.

as things age past a certain point ... differences in their age (since
last used) become a less reliable differentiator (as to higher or lower
probability of being used in the near future) ... and some sort of
pseudo-random selection can actually outperform strict ordering (some of
these are scenarios where LRU devolves to FIFO ... and random performs
better than straight FIFO).

in the 70s, there were a number of places that looked at multiple bits
... effectively one per scan, possibly one hardware bit and one or
more software bits, where RRB becomes more like a logical shift
instruction.

One of the issues is if it takes too long to do a complete scan ...
then there is little differentiation being made between pages.
Splitting memory into storage and expanded storage ... makes regular
storage smaller and therefore the scan goes faster. Having multiple
bits (more history) also tends to make the scan go faster ... since the
additional history tends to require the scan to look at more pages each
time.
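
A minimal sketch of the multiple-bit idea (illustrative only, not any
particular system's implementation): each scan shifts the current
reference bit into a per-page history value, so pages can be ordered by
more than a single used/not-used bit:

  def scan(pages):
      """One scan pass: shift each page's reference bit into its history value."""
      for p in pages:
          p["history"] = ((p["history"] >> 1) | (p["referenced"] << 7)) & 0xFF
          p["referenced"] = 0               # reset the hardware bit for next interval

  def pick_victim(pages):
      """Replace the page with the smallest history (longest since any use)."""
      return min(pages, key=lambda p: p["history"])

  pages = [{"id": i, "referenced": 0, "history": 0} for i in range(4)]
  pages[2]["referenced"] = 1                # page 2 touched this interval
  scan(pages)
  pages[0]["referenced"] = 1                # page 0 touched the next interval
  scan(pages)
  print(pick_victim(pages)["id"])           # one of the never-referenced pages (1)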

Again from the early 70s, another approach is to offset the testing of
the reference bit from resetting the reference bit ... say by 1/2 or
1/4 the number of pages. This gave rise to the name "clock" in the early
80s, i.e. two "hands" rotating around storage pages ... one resetting
and the other testing ... rather than a single "hand" resetting and
testing simultaneously; while I had done something similar in the late
60s and early 70s ... "clock" was a Stanford PhD thesis in the early 80s.
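
A minimal sketch of the two-hand arrangement (illustrative only, not the
actual code): the reset hand runs a fixed offset ahead of the test hand,
so the test becomes "referenced within the last offset frames" rather than
"referenced since the last full pass around all of storage":

  def clock_select(frames, hand, offset):
      """Two-hand clock: the reset hand runs `offset` frames ahead of the test
      hand; a frame not re-referenced by the time the test hand reaches it is
      the victim.  Returns (victim_index, new_hand_position)."""
      n = len(frames)
      while True:
          frames[(hand + offset) % n]["referenced"] = 0   # reset hand (ahead)
          if not frames[hand]["referenced"]:              # test hand
              return hand, (hand + 1) % n                 # untouched in the window
          hand = (hand + 1) % n                           # touched recently: skip

  frames = [{"page": i, "referenced": 1} for i in range(6)]
  victim, hand = clock_select(frames, hand=0, offset=2)
  print(victim)   # 2: cleared by the reset hand two steps earlier, never re-set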

Besides doing VM, GML, and a bunch of online & conversational stuff, the
science center had done a lot of work on performance monitoring,
workload & configuration profiling, and stuff that would turn into
capacity planning. This included various kinds of system simulators
and analytical modeling. One of the system simulators included using
instruction/storage traces for simulating a variety of page replacement
algorithms ... including "exact LRU" (aka maintaining exact LRU
ordering of *every* page based on each & every reference). In the
early 70s, I had come up with a variation on clock which would always
beat "exact LRU" (coming closer to Belady's "OPT" ... given fore
knowledge of program execution, it would always choose the page for
replacement that resulted in the fewest total page faults).
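
For flavor, a trace-driven sketch of the kind of comparison described
(illustrative only, not the original simulator): "exact LRU" keeping a
full recency ordering per reference, versus Belady's OPT using
foreknowledge of the rest of the trace:

  def lru_faults(trace, nframes):
      resident, faults = [], 0              # list kept in exact recency order
      for p in trace:
          if p in resident:
              resident.remove(p)
          else:
              faults += 1
              if len(resident) == nframes:
                  resident.pop(0)           # least recently used
          resident.append(p)                # most recently used at the end
      return faults

  def opt_faults(trace, nframes):
      resident, faults = set(), 0
      for i, p in enumerate(trace):
          if p in resident:
              continue
          faults += 1
          if len(resident) == nframes:
              rest = trace[i + 1:]          # foreknowledge of the rest of the trace
              def next_use(q):
                  return rest.index(q) if q in rest else float("inf")
              resident.discard(max(resident, key=next_use))
          resident.add(p)
      return faults

  trace = [1, 2, 3, 4, 1, 2, 5, 1, 2, 3, 4, 5]
  print(lru_faults(trace, 3), opt_faults(trace, 3))   # 10 vs 7 on this trace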

--
virtualization experience starting Jan1968, online at home since Mar1970

Really dumb IPL question

dba@LISTS.DUDA.COM (David Andrews) writes:
Perhaps TCPIP autolog would do the trick as well? Does your product run
all the time?

I had originally created the autolog command for automated
benchmarking ... near the end of the system boot/ipl process, it would
autolog a generic id (*autolog1*). For benchmarking, *autolog1* would
have a script that autolog'ed the benchmarking ID ... which had a script
controlling which processes to autolog and which synthetic workload
each process should run. At the end of the benchmark, it would update
the script for the next benchmark, and do an auto shutdown/reboot
... which would invoke the next benchmark. This would be repeated as
long as needed. For instance, for the final validation for my resource
manager, over 2000 automated benchmarks were run, taking 3 months
elapsed time.
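
A hypothetical sketch of that driver loop (the file names, function names,
and json format here are all made up for illustration; the real thing was
CP/CMS ids and scripts, not Python): each boot runs the next scheduled
benchmark, records the result, advances the schedule, and re-IPLs:

  import json

  def autolog1(schedule_file="schedule.json", results_file="results.log"):
      """Run the next scheduled benchmark, record it, advance the schedule."""
      with open(schedule_file) as f:
          schedule = json.load(f)            # remaining benchmark definitions
      if not schedule:
          return "halt"                      # all benchmarks complete: stop rebooting
      bench = schedule.pop(0)
      result = run_benchmark(bench)          # stand-in for autologging worker ids
      with open(results_file, "a") as f:
          f.write(json.dumps({"bench": bench, "result": result}) + "\n")
      with open(schedule_file, "w") as f:
          json.dump(schedule, f)             # updated "script" for the next boot
      return "reipl"                         # shutdown/re-boot into the next run

  def run_benchmark(bench):
      # placeholder: would autolog the specified ids and drive the synthetic workload
      return {"users": bench.get("users", 0), "status": "ran"}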

we had collected lots of workload and configuration profile information
from both internal and customer datacenters ... and built graphs
depicting the normal range of operations. The first 1000 benchmarks were
manually defined to evenly cover the wide variety of configurations and
workloads (along with numerous benchmarks way outside normal observed
operations). The final 1000 was an intelligent automated program that
included sophisticated analytical system model ... which would "look
for" interesting operating points (based on all benchmark data todate;
it would also predict what the resulting benchmark should be and compare
the actual results with the predicted; interesting that it was used to
help calibrate actual operation as well as the analytical system model).

some of the items (including autolog) were picked up and released in
the standard vm370 release 3 ... and other features were packaged and
released in my resource manager. misc. past posts mentioning my resource
manager
http://www.garlic.com/~lynn/subtopic.html#fairshare

for production operation, while cp67 got auto-reboot fairly early ... it
still required (operator) manual initiation of lots of the services
(like networking). autolog became the standard process for starting all
these standard processes (back then they were referred to as service
virtual machines ... the current nomenclature seems to be virtual
appliance).

One of the issues for virtual machine based system use for 7x24 online
timesharing operation ... both dedicated corporate use as well as
commercial timesharing service bureaus (the '60s & '70s version of
cloud computing) was cutting datacenter costs for offshift, usually
light usage. In the early days, machines were leased and the monthly
datacenter charge was based on hrs taken from the cpu meter, which ran
when the cpu and/or any channel was active (and could continue to run
for 400ms after all activity had ceased). A very early trick in cp67 was
how to leave an active channel program (to accept incoming terminal
activity) ... and still allow the channel to go to sleep when no data was
actually being transferred. The later trick was increasing dark-room
operation for lots of offshift (not only auto-boot, but also having all
the expected services back up and running). misc. past posts
mentioning early online commercial timesharing service bureaus:
http://www.garlic.com/~lynn/submain.html#timeshare

part of picking up a lot of the stuff I had been doing on 370 for release 3
... and then also releasing a lot of the rest of the stuff as the resource
manager ... was getting stuff back into the 370 product pipeline after
the failure of FS ... misc. past posts mentioning Future System effort
http://www.garlic.com/~lynn/submain.html#futuresys

i.e. FS was radically different from 370 and was going to completely
replace it; as a result lots of work had ceased on 370 related products
(I continued to work on 370 and made critical observations about the reality
of what was going on in FS). Recently I ran into a description of OS/VS2
SVS & MVS that were supposedly just on the "glide-path" to the FS
operating system ("OS/VS2 Release 3").

--
virtualization experience starting Jan1968, online at home since Mar1970

Paper tape

"Nicholas D. Richards" <nicholas@salmiron.demon.co.uk> writes:
One night while watching over a printer waiting for one of my listings
to come out, I was left seeing stars as the printer ran out of paper and
the cover shot up, catching my chin.

I remember one weekend (undergraduate in the 60s; they normally shut down
operations at 8am Saturday and didn't restart until 8am Monday ... so I could
have the whole datacenter to myself for the weekend; 48hrs w/o sleep
made Monday morning classes a little difficult) when I was doing some
work on os/360 on a 360/65 (actually a 360/67, but running os/360 in
non-translate mode) ... and the system came to a stop and the bell rang
... and I couldn't get anything to happen.

finally after 45-60 mins ... I slammed the 1052-7 operator's console,
and the last sheet of paper fell out of the terminal.

turns out that there is a "finger" that senses that the last of the
(fan-fold) paper has passed (and signals an "intervention required"
interrupt to the system) ... however, in this particular case, there was
still enough friction that the last page of paper was still sitting in
the terminal. It wasn't until I slammed my fist into the console
... that the last sheet of paper fell out ... and I realized I needed to
feed another box of paper into the console.

--
virtualization experience starting Jan1968, online at home since Mar1970

several of the participants were heavily involved in privacy issues
and had done a number of detailed, in-depth privacy surveys. The
number one issue was identity theft ... mostly the subcategory
account fraud, where crooks harvest information from previous
transactions to perform fraudulent financial transactions (current
paradigm effectively uses static data for whatever authentication is
provided ... and therefore it is a form of replay attack).
http://www.garlic.com/~lynn/subintegrity.html#harvest

A major source for the harvested information has been various kinds of
data breaches ... which there seemed to be little or nothing being
done about. A big issue is that normally institutions are motivated to
provide security & threat countermeasures when their own assets are at
risk ... however, in this particular scenario ... the risk isn't to
the institution but to the customers (resulting in little motivation for
the institution). It was hoped that the publicity resulting from the
data breach notification would provide some motivation to institutions
to take countermeasures.

Earlier, we had been called in to consult with a small client/server
startup that wanted to do payment transactions on their server; they
had also invented this technology called SSL they wanted to use; the
result is now frequently called electronic commerce.
http://www.garlic.com/~lynn/subnetwork.html#payment

Somewhat as a result, in the mid-90s we were asked to participate in
the x9a10 financial standard working group which had been given the
requirement to preserve the integrity of the financial
infrastructure for ALL retail payments. There were threat
& vulnerability studies of the various payment types, mechanisms,
and environments.

The result was the x9.59 financial transaction standard. One of the
things that x9.59 did was slightly tweak the existing paradigm and
eliminate the replay attack threat (i.e. did nothing to prevent data
breaches, skimming, eavesdropping, etc ... just made the harvested
information useless to the crooks for performing fraudulent financial
transactions). In some sense, it can be considered to substitute
strong authentication & integrity for confidentiality. Note that the
major use of SSL in the world today is this earlier work we did on
electronic commerce for hiding account numbers and transaction
information. X9.59 eliminated the requirement to hide the information
and therefore also eliminates the major use of "SSL" in the world
today.
http://www.garlic.com/~lynn/x959.html#x959

--
virtualization experience starting Jan1968, online at home since Mar1970

of course the "treasury" TARP doesn't include what the FED also has been
doing. the baseline points out that the whole thing has been exceedingly
favorable to bankers and has done nothing regarding the issue of future
"responsibility" and "system risk".

there is a similar article about citi using DMCA to take down an internal
document discussing how favorable things have been to the banks.

and there have been a number of recent articles that the FED's actions have
also been unduly favorable to the institutions that carry major
responsibility for the whole mess. note that the baseline article
mentions the difficulty in actually using TARP funds for buying up toxic
assets ... it says nothing about what must be the equal difficulty of the
FED having bought up twice the toxic assets (as the total appropriated in TARP).

the above article does point out that when similar things happened in
other countries ... the people behind TARP were recommending
significantly stronger measures ... than what they implemented when it
happened in our own country.

i.e. the point of the (original) cal. data breach notification
legislation (as per above) was trying to motivate institutions to
protect financial transaction details ... since 1) there seemed to be
little being done and 2) it was a major source of harvested information
for crooks enabling fraudulent financial transactions (aka since the
existing paradigm is subject to replay attack with static data).

There are other issues in the existing paradigm with respect to harvesting
static financial transaction information for fraudulent financial
transactions (independent of the issue of whether the institutions are
being asked to protect stuff that doesn't put them at risk ... w/o
outside legislation and/or regulations):

• security proportional to risk; the value of the information to the
merchant can be a few dollars (profit on the transaction) and to the
transaction processor a few cents. the value of the information to the
crook can be the credit limit and/or account balance. as a result the
crooks may be able to afford spending 100 times as much attacking the
system as the merchant/processor can afford spending to defend

• dual-use vulnerability; in many cases, in the current paradigm, the
static information needed by crooks for performing fraudulent
transactions is also the information that dozens of business processes
require at millions of locations around the world. as a result, I've
periodically observed that even if the world was buried under miles of
information hiding encryption, it still couldn't stop information
leakage.

... in fact, at the time of the cal. data breach notification work
... there seemed to be a tendency to have news related to fraud where
the consumer had some control ... like lost/stolen cards ... but a
decided tendency to not publish information where the consumer had no
control; insider threats, data breaches, and some skimming. External
skimming situations where consumer might be able to notice something
amiss showed up in the news ... but situations where skimming devices
were installed during manufacturing tended to get much less press.

--
virtualization experience starting Jan1968, online at home since Mar1970

We had been called in to consult with a small client/server startup
that wanted to do payment transactions on their server; they had also
invented this technology called SSL they wanted to use; the result is
now frequently called "electronic commerce".

Somewhat as a result, in the mid-90s, we were asked to participate in
the x9a10 financial standard working group which had been given the
requirement to preserve the integrity of the financial infrastructure
for ALL retail payments. There were threat & vulnerability studies
of the various payment types, mechanisms and environments.

The result was the x9.59 financial transaction standard. One of the
things that X9.59 did was slightly tweak the existing paradigm and
eliminate the replay attack threat (i.e. did nothing to prevent data
breaches, skimming, eavesdropping, etc ... just made the harvested
information useless to the crooks for performing fraudulent financial
transactions).
http://www.garlic.com/~lynn/x959.html#x959

Note that the major use of SSL in the world today is this earlier work
we had done on "electronic commerce" for hiding account numbers and
transaction information. X9.59 eliminated the requirement to hide the
information and therefore also eliminates the major use of "SSL" in the
world today.

it has been recognized from the beginning that the majority of devices
connecting to the internet have enormous vulnerabilities.

in the mid-90s, there were presentations by consumer dial-up online
banking operations about moving to the internet, motivated by
offloading the enormous consumer support costs of operating their own
proprietary dial-up environment (especially involved with supporting
serial-port devices ... aka the consumer serial-port dial-up
modems). at the same time, the commercial/business dial-up online
banking operations were claiming they would never move to the internet
(because of the enormous security vulnerabilities of the typical
devices used to connect to the internet). More recently there have
been recommendations that businesses have a PC dedicated only to
online banking and *NEVER* used for anything else.

In parallel with the work on the x9.59 financial standard, there was
work on the EU FINREAD device ... basically extending the "end-point"
for performing financial operations to a hardened external device,
that provided its own secure display (of the transactions) and
required real human interaction to perform the operation. While x9.59
eliminated replay attacks (data-breaches, skimming, eavesdropping,
etc) ... there was still the vulnerability of compromised end-points
impersonating real human operation (which the EU FINREAD standard would
address for online financial operations).
http://www.garlic.com/~lynn/subintegrity.html#finread

Unfortunately, in the early part of this century there were some ill-fated
token deployments that resulted in the industry retrenching from all
such security approaches. One was a POS deployment involving tokens with
the YES CARD vulnerability (there was a quote from the period about
billions being spent to prove that chips are less secure than
magstripe).
http://www.garlic.com/~lynn/subintegrity.html#yescard

Another was a deployment of tokens along with a serial-port interface
device (for online/internet operation) ... and the resulting consumer
support nightmare (apparently the ephemeral institutional knowledge
about serial-port support issues had evaporated in the very short
period since consumer dial-up online banking started moving to the
internet).

Is the United States the weakest link when it comes to credit card security?

From: lynn@garlic.com (Lynn Wheeler)
Date: 01 Oct, 2010
Subject: Is the United States the weakest link when it comes to credit card security?
Blog: Payment System Network

Part of the issue was that there was a rather large US pilot in the early
part of this century ... but it was in the period of the YES CARD
vulnerability. At the time, there was a presentation in the "ATM
Integrity TaskForce" meetings about the YES CARD, which prompted
somebody in the audience to comment that they had managed to spend
billions on chips to prove that they are less secure than
magstripe.

Since then there have been various news items about the cost of such a
transition in the US. The possibility is that in the wake of the yes
card scenario, there is worry that the actual cost might involve a
whole series of such roll-outs.

in the early part of this century, there were also a number of
safe/secure internet payment products that were being pitched to
merchants and getting very high acceptance. Then came the cognitive
dissonance when the merchants were told that the interchange rate for
safe/secure payment products would be higher than the highest fraud
interchange rate ... and it all sort of fell apart (merchants had
anticipated that since in the past, the interchange rate was
proportional to the fraud rate ... that the interchange rate would
drop with safe/secure payments rather than go the other way).

as to the control issue ... in the late 90s, the major internet
players offered to underwrite a ubiquitous roll-out of secure payment
infrastructure ... expecting to recoup the expense by increased uptake
of internet products. the major payment players didn't know what to
make of the offer ... an attack of decision paralysis and severe anxiety
that it might result in losing control of the payment business.

--
virtualization experience starting Jan1968, online at home since Mar1970

from above:
The number of reported data breaches have skyrocketed since numerous
states, including Pennsylvania and New Jersey, adopted new data breach
notification laws in 2005 and 2006.

... snip ...

We were tangentially involved in the (original Cal) state data breach
notification legislation. We had been brought in to help wordsmith the
electronic signature legislation and several of the parties were
heavily involved in privacy issues.
http://www.garlic.com/~lynn/subpubkey.html#signature

They had done detailed (citizen) privacy surveys and the number one issue
was "identity theft" ... a significant part was in the form of "account
fraud" as a result of data breaches. One of the issues with regard to data
breaches ... is crooks use the information to perform fraudulent
financial transactions against the account holders ... i.e. there is
no direct threat against the institutions where the breaches occur
... and therefore they had much less motivation to take
countermeasures.
http://www.garlic.com/~lynn/subintegrity.html#harvest

i.e. the point of the (original) cal. data breach notification
legislation (as per above) was trying to motivate institutions to
protect financial transaction details ... since 1) there seemed to be
little being done and 2) it was a major source of harvested information
for crooks enabling fraudulent financial transactions (aka since the
existing paradigm is subject to replay attack with static data).

There are other issues in the existing paradigm with respect to harvesting
static financial transaction information for fraudulent financial
transactions (independent of the issue of whether the institutions are
being asked to protect stuff that doesn't put them at risk ... w/o
outside legislation and/or regulations):

• security proportional to risk; the value of the information to
the merchant can be a few dollars (profit on the transaction) and to
the transaction processor a few cents. the value of the information to
the crook can be the credit limit and/or account balance. as a result
the crooks may be able to afford spending 100 times as much attacking
the system as the merchant/processor can afford spending to defend

• dual-use vulnerability; in many cases, in the current paradigm, the
static information needed by crooks for performing fraudulent
transactions is also the information that dozens of business processes
require at millions of locations around the world. as a result, I've
periodically observed that even if the world was buried under miles of
information hiding encryption, it still couldn't stop information
leakage.

... in fact, at the time of the cal. data breach notification work
... there seemed to be a tendency to have news related to fraud where
the consumer had some control ... like lost/stolen cards ... but a decided
tendency to not publish information where the consumer had no control;
insider threats, data breaches, and some skimming. External skimming
situations where consumer might be able to notice something amiss
showed up in the news ... but situations where skimming devices were
installed during manufacturing tended to get much less press.

--
virtualization experience starting Jan1968, online at home since Mar1970

somewhat in its place, the Fed Reserve has spent $1.4T buying toxic assets
(twice what was appropriated in TARP ... since the $700B would have
hardly made a dent in the amount of toxic assets).
http://www.nytimes.com/2010/09/01/business/01fed.htm

There is seeming conflict behind some of the reasons given for not
using TARP to buy toxic assets and federal reserve going ahead and
buying toxic assets.

apparently the Federal Reserve stepped in to help get them off the banks'
books. Then the banks could use the $1.4T from the FED to pay off TARP
(and other things). When it first hit, PIMCO bought several tens of
billions at 22 cents on the dollar ... anybody want to guess whether the
FED paid face value?

aka ... at 22 cents on the dollar, the four too-big-to-fail
institutions would have failed.

A large part of the toxic assets are 100%/no-down loans in hot speculation
markets that were running 15-30% inflation. The bubble bursts and values
deflate to pre-speculation levels (and aren't likely to return soon); maybe
50%. Better than 22 cents on the dollar, but still enough to take down
the four too-big-to-fail; the rest is lots of obfuscation and
mis-direction.

Virtual Machines and TARP

There was a brief note in Jan2009 that IDC would be helping treasury value
toxic assets (back when TARP was going to be used for purchases). IDC
was one of the early virtual machine commercial online service bureaus
in the 60s. Then they (and NCSS) started moving up-value into
financial information; in the early 70s, IDC bought the pricing
service division from one of the rating agencies.

Selling the pricing service division was possibly the leading edge of rating
agencies not actually needing to value assets to give triple-A ratings
(to the toxic CDOs). Disclaimer: in the 60s, I interviewed with IDC
... and knew lots of the employees. It isn't that toxic assets are
that hard to value ... it is that a valid valuation wouldn't keep the large
holders of toxic assets from failing. As I mentioned before, we had
been asked to look at issues of securitized mortgages (CDOs) in the
late '90s.

Paper tape

cb@mer.df.lth.se (Christian Brunschen) writes:
There's also the big difference between understanding code and being
able to maintain or re-use it. So, even if you had tried very hard to
understand the code, if it's just badly designed and written it should
_still_ be thrown out and re-done.

sometimes you can start with well-designed code ... and it is the
maintenance over a period of years that turns it into spaghetti (or
maybe a septic tank).

they were doing dedicated, stand-alone testing ... with available
mainframes scheduled nearly 7x24. they had once tried MVS in the
environment (i.e. being able to do on-demand, concurrent testing of
several different testcells) ... but in that environment,
MVS had 15min MTBF before having to re-ipl/re-boot.

I decided to rewrite i/o supervisor so that it would never fail (so
they could do on-demand, concurrent testing of any number of
development testcells ... vastly improving productivity and thruput).

one of the side-effects of doing the rewrite ... was that it was also
more compact code, had much shorter pathlength, and had more function
(besides never failing).

--
virtualization experience starting Jan1968, online at home since Mar1970

There is a fairly major divide ... between breaches that involve the
institution's assets and risk to the institution ... and the most common
data breaches, where the transaction details are used by crooks to
perform fraudulent financial transactions against consumer accounts
... and don't represent a direct threat/risk to most of the
institutions where the breach occurred.

Note that at the time of the cal. state data breach legislation, they
were also working on an "opt-in" personal information sharing
legislation ... which was then pre-empted by federal "opt-out" sharing
provisions in GLBA (this is the same legislation that repealed
Glass-Steagall). Since then there have been a series of federal data
breach bills introduced (so far none passing) that generally
fall into two categories: 1) roughly equivalent to the original
cal. state legislation and 2) data breach notification legislation
that would effectively pre-empt state laws and eliminate most
notification requirements (careful what you ask for).

As to the "opt-out" sharing provisions in GLBA, a couple years ago at
an annual national privacy conference in DC, there was a panel
discussion with the FTC commissioners and somebody from the back of
the room got up and asked if the FTC commissioners would be doing
anything about (even the weak) "opt-out" sharing. The questioner claimed
to be involved in call center operations used by a majority of financial
institutions and said that most of the people answering "opt-out"
calls had no provisions for recording who called and wanted to
"opt-out".

As to data breaches regarding financial transaction information that
can be used by crooks to perform fraudulent financial transactions
(effectively a kind of static data replay attack) ... as previously
mentioned the x9.59 financial transaction standard slightly tweaked
the paradigm and eliminated that risk ... aka did nothing about
eliminating data breaches, skimming, eavesdropping, etc ... just
eliminated the risk of crooks being able to use the information for
fraudulent financial transactions.
http://www.garlic.com/~lynn/x959.html#x959

--
virtualization experience starting Jan1968, online at home since Mar1970

Is the United States the weakest link when it comes to credit card security?

From: lynn@garlic.com (Lynn Wheeler)
Date: 05 Oct, 2010
Subject: Is the United States the weakest link when it comes to credit card security?
Blog: Payment System Network

some of the differences with the late '90s approach may not be
immediately obvious.

traditionally, solutions are by parties that have vested interest
(profit motivation) in the solution and/or interest/profit in the
status quo.

the late '90s solution was to drastically improve the online threat &
vulnerability landscape by parties that viewed it as a cost & market
inhibitor ... an approach which appeared to scare many of the
traditional players (with profit/vested interest in the status quo)
... not the least of which were the traditional stakeholders being
worried that major disruptive paradigm change might make them
obsolete.

--
virtualization experience starting Jan1968, online at home since Mar1970

Rich Alderson <news@alderson.users.panix.com> writes:
The project collapsed when one of the applications programmers got a copy
of the DEC-10 Pascal compiler from 3M--which along with their mods still
had most of the comments in German from the port of the ETH compiler for
the CDC 6600. About the same time, the systems programmers obtained the
HITAC 8000 Pascal compiler from the Australian Atomic Energy Commission. I
got to read the sources for both, and probably still have 9-track tapes; I
*know* I still have a copy of VS Pascal from when it was a third-class
citizen from IBM.

VS Pascal was started in the Los Gatos VLSI tools group by two of their
people ... initially using metaware tools (metaware was used for a
number of projects related to VLSI design/support at Los Gatos). old
reference to TWS
http://www.garlic.com/~lynn/2004d.html#71 What terminology reflects the "first" computer language ?

One of the two then left ... went thru a number of startups, became VP
of software development at MIPS and then general manager of the SUN
business unit responsible for JAVA (after SGI bought MIPS).

The other hung around for a while ... and I was trying to get him to do
a (mainframe) C front-end for the pascal backend ... before he left to
join metaware. Then when the Palo Alto group was working on a BSD port to
the mainframe ... I convinced them to subcontract the C compiler to
metaware. Before mainframe BSD shipped ... the group was redirected to
do a PC/RT BSD port instead (aka "AOS" written to the "bare metal")
... and that somewhat convoluted history is why "AOS" was done with
metaware C compiler.

VS Pascal was used for a number of products ... including the mainframe
TCP/IP protocol stack. I attribute to the difference between Pascal and C
the fact that the (mainframe Pascal) implementation had none of the
buffer-length exploits common with TCP/IP stacks implemented in C.

There were other issues with that mainframe implementation ... namely
high processor overhead and performance. I did the changes to support
RFC1044 and in some testing at Cray Research got channel media thruput
using a modest amount of a 4341 (about a 500 times improvement in the
number of instructions executed per byte moved). misc. past posts mentioning
rfc1044 support
http://www.garlic.com/~lynn/subnetwork.html#1044

We left about the time of the corporate troubles in the early 90s.
Another consequence of the corporate troubles was a lot of migration to
COTS vlsi tools ... which included tech transfer of various internal
tools to external vendors. I've mentioned before that I had a consulting
contract to port one such 50k+ statement vs/pascal tool to other
platforms. That was when I found that many of the pascals on other
platforms had possibly been used for little more than student
instruction (and for one major target platform, the vendor had recently
outsourced development/support to a location 12 timezones away; some
relation to the term "rocket science").

The 10 Highest-Paid CEOs Who Laid Off The Most Employees

One of the other reports is that GAO started doing audits of public company
financial filings ... which showed a big uptick in fraudulent filings
&/or accounting errors ... even after Sarbanes-Oxley (apparently the SEC
was doing little or nothing in this area).

One of the explanations given for the fraudulent filings was that it
significantly increased CEO bonuses and that even if the filings were
later corrected, the bonuses weren't reclaimed.

One semi-facetious set of choices was:

1) sarbanes-oxley had no effect on public company fraudulent financial
filings
2) sarbanes-oxley encouraged public company fraudulent financial
filings
3) without sarbanes-oxley, all public companies would be making
fraudulent financial filings.

There were a number of articles in the mid/late 90s that large
corporations had lobbied congress for changes so that retirement
funds could be moved from liability to asset ... resulting in a big
bottom-line jump and a massive spike in CEO bonuses. The issue then
raised was whether, in any possible bankruptcy, creditors would have a
claim on those funds.

I can imagine somebody writing a book in a couple decades drawing
parallels between the last 15yrs and the "robber baron" stories from
100 years earlier.

Two years ago there was a Wharton business school article that estimated
approx. 1000 people are responsible for 80% of the current situation and
that it would go a long way toward improving things if the gov. could
figure out something to do with those 1000.

There is folklore about a large corporation that went into the red in
the early 90s. Supposedly the top executives then spent the rest of
that fiscal year ... shifting expenses from the following fiscal year
into that year ... since for the purpose of the executive bonus plan,
once you are in the red ... it didn't make much difference how far
into the red things went. The expense shifting resulted in the
following fiscal year showing a small profit and the way the executive
bonus plan was written (as percent improvement over the prior year),
the executive bonus that year was more than twice as large as any
prior bonus ever paid (in effect, the executives earned more by the
company going into the red).

As mentioned upthread ... the cal. data breach legislation wasn't
motivated by all data about the public ... it was motivated by
extensive consumer "privacy surveys" showing that their #1 issue was
that (in the current retail payment paradigm) data from previous
financial transactions can be used by crooks to perform fraudulent
financial transactions. The risk wasn't the leakage of the data itself
... the risk was that the leakage of the data could result in fraudulent
financial transactions against their accounts.

The approach taken by the x9a10 financial standard working group
wasn't to continue to try and prevent such leakage (i.e. static data
used in a kind of replay attack) ... since the same data that the
crooks were after, was also the data that was needed in dozens of
business processes at millions of locations around the world. Instead,
the x9a10 financial standard working group developed a standard that
directly addressed the risk ... the fraudulent financial transactions
... making it immaterial whether the information leaked or not
(therefore eliminating the need to secure such data).
http://www.garlic.com/~lynn/x959.html#x959

A large part of the current problem with such data ... is not so much
the difficulty of securing such data ... it is securing such data
... so that there is absolutely no access to the data while
simultaneously allowing access to such data in dozens of business
processes at millions of locations around the world. In the current
paradigm ... just exposing a card for doing transaction at POS, is a
potential point of leakage.

--
virtualization experience starting Jan1968, online at home since Mar1970

for the fun of it ... old reference to mainframe pl.8 benchmark against
vs/pascal mainframe benchmark (on 3033; aka lots of optimization
technology went into pl.8 ... that was later propagated to other
compilers/languages)
http://www.garlic.com/~lynn/2006t.html#email810808

os/360 PL/I was huge, both compiler and run-time ... the compiler had a
large number of load modules and the run-time had a large number of routines.

I have memories of a PL/I demonstration at the univ. before it was released;
they brought a tape which was loaded onto a 2314 disk pack ... and did
some number of demonstrations over a couple-day period. Then there was
some issue raised about whether somebody at the univ. may have made an
unauthorized backup of the 2314 when nobody was looking.

the size of the os/360 PL/I contributed to the development of subsets for
student instruction (most universities couldn't afford the computer
time/resources it took to compile even a minimal student PL/I program).

and for some topic drift ...

this was in the period of the 23jun69 unbundling announcement ...
prompted by various litigation ... the beginning of charging
for application software (including languages), SE services, etc.
they did manage to make the case that kernel software should still be
free. misc. past posts mentioning unbundling
http://www.garlic.com/~lynn/submain.html#unbundle

the above mentions that a PL/M subset was implemented for Intel and used to
write CP/M ... as well as much of the software for the 8080, 8085 & Z80
during the 70s.

up until unbundling, new SEs got much of their training by being part of
a large group at the customer site ... sort of an apprenticeship. after
unbundling, anybody doing anything at the customer site had to be
charged for. with unbundling, that "training" evaporated. This prompted
the initial HONE program ... several CP67 internal datacenters around
the country allowing online access for SEs in the branch offices ... being
able to run/operate various operating systems in virtual machines. misc.
past posts mentioning hone
http://www.garlic.com/~lynn/subtopic.html#hone

--
virtualization experience starting Jan1968, online at home since Mar1970

Question: Why Has Debit Grown So Quickly?

If a debit card is enabled for signature debit ... then whether or not
the card has always been used with a PIN ... crooks can still skim the
magstripe and use it w/o the PIN. A couple years ago there was an article
that signature debit has 15 times the fraud of PIN debit.

there was an interesting article a few years ago claiming that debit
cards are less secure than credit cards. the issue was that early in
magstripe ... crooks were creating counterfeit credit cards by
creating/guessing account numbers (according to a known formula) and
generating magstripes. The countermeasure to that was the development
of the card security code added to (credit) magstripes (and the attack
on the security code was skimming, where the complete valid magstripe
was copied). The following includes mention of the magstripe being
managed out of the Los Gatos lab (where I had several offices and labs):
https://en.wikipedia.org/wiki/Magnetic_stripe_card
card security code
https://en.wikipedia.org/wiki/Card_security_code

debit cards didn't require such a card security code ... since the PIN
effectively provided an equivalent countermeasure to counterfeit
magstripes based on formula-generated account numbers. However, when debit
cards were being enabled for signature debit (w/o PIN) ... all of a
sudden they were vulnerable to counterfeit magstripes with formula-generated
account numbers (not even having to skim a valid magstripe).

I think "check card" was an attempt at branding ... nearly all debit
cards appear to now be issued enabled for "signature debit" ... and it
takes a special request to get a debit card that is not enabled for
signature. Some of the so-called "protection" for signature-debit
(and/or differences between signature and PIN) is institutional policy
and not subject to the same regulations as credit. debit article
https://en.wikipedia.org/wiki/Debit_card

above lists regulated card holder liability.

PIN has been considered as part of two-factor authentication (card as
something you have and PIN as something you know)
... a countermeasure against lost/stolen cards as well as some
forms of counterfeit magstripe.
http://www.garlic.com/~lynn/subintegrity.html#3factor

Two-factor authentication is assumed to be more secure ... given that
the different factors have independent attacks/failures.

With the proliferation of shared-secret (something you know)
authentication (PINs, passwords, etc), people frequently have scores of
things to memorize ... which has exceeded the breaking point of
most humans. As a result, there is greater use of writing down these
shared-secrets ... and one study found that 1/3rd of debit cards
have PINs written on them.
http://www.garlic.com/~lynn/subintegrity.html#secrets

Also, advances in technology include compromises to POS terminals &
ATM machines for skimming the magstripe (initially used by crooks to
counteract the introduction of the card security code). However, these
advances have also included being able to skim the PIN at the same
time as the magstripe is skimmed ... invalidating the assumption about
multiple-factor authentication having independent attacks/failures.

some of the debit card processing can be institutional policy ... some of
the branding dates back before the walmart/merchant antitrust
... history
http://www.inrevisacheckmastermoneyantitrustlitigation.com/history.php3

where debit cards with an association "bug" can be signature debit and
the transaction carried thru the association/credit networks (as opposed
to the debit networks).

part of the overhead was a large number of modules so the compiler could
fit into a minimum storage size ... resulting in a large number of disk
accesses and many minutes to compile/load/execute even simple programs.

relative size/complexity increase of GCC hasn't kept pace with increase
in processor speeds and memory sizes ... so even program source that
increased by same ratio as GCC ... would tend to compile today in
seconds (rather than minutes).

--
virtualization experience starting Jan1968, online at home since Mar1970

Article mentions transaction redo to bring databases current and the
overhead imposed by ACID properties. ACID is courtesy of Jim and his
work creating TPC (after he left research ... when he tried palming
off a lot of dbms consulting on me; this is back in the days of
original sql/relational implementation). misc. past posts mentioning
system/r
http://www.garlic.com/~lynn/submain.html#systemr

In the celebration for Jim (after he disappeared), there was a point
made that the ACID properties and formalizing transaction semantics
... went a long way to creating modern financial dataprocessing
... aka the integrity of ACID & formal transaction semantics getting
auditors to accept computer records.

back in the cp67 era ... I had done a paged mapped interface for the CMS
file system, and later moved it to vm370 and included it in the internal
system distributions that I supported (like csc/vm and later sjr/vm).
In the early 80s, it was easy to show that on identical configuration &
3380s ... for a moderate i/o intensive workload ... it had approx. three
times the thruput of the unmodified, non-paged mapped filesystem (shorter
pathlength, higher transfer rates, much better scaleability, etc). misc.
past posts mentioning work on paged mapped filesystem
http://www.garlic.com/~lynn/submain.html#mmap

it wasn't simply just page-mapped/non-page-mapped ... because of the
page-mapped interface I was able to do some fairly trivial filesystem
enhancements (some of which would have been significantly more difficult
in a non-page-mapped environment). Other things could have been done to
the base filesystem ... but weren't ... like contiguous allocation
... akin to the recent (linux) enhancement to EXT3 for the EXT4 filesystem.
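
As a loose modern analogy (a POSIX sketch on a current system, nothing to
do with the actual CMS/vm370 implementation and not a performance claim),
the difference between a copy-style read interface and a paged-mapped view
of the same file looks roughly like this:

/* minimal POSIX sketch (not CMS): contrast copy-style I/O with a
   page-mapped view of the same file */
#include <fcntl.h>
#include <stdio.h>
#include <sys/mman.h>
#include <sys/stat.h>
#include <unistd.h>

int main(int argc, char **argv)
{
    if (argc < 2) { fprintf(stderr, "usage: %s file\n", argv[0]); return 1; }

    int fd = open(argv[1], O_RDONLY);
    if (fd < 0) { perror("open"); return 1; }

    struct stat st;
    if (fstat(fd, &st) < 0 || st.st_size == 0) { close(fd); return 1; }

    /* copy-style: every byte is moved through an intermediate buffer */
    char buf[4096];
    long sum1 = 0;
    ssize_t n;
    while ((n = read(fd, buf, sizeof buf)) > 0)
        for (ssize_t i = 0; i < n; i++)
            sum1 += (unsigned char)buf[i];

    /* page-mapped: the file's pages appear directly in the address space
       and are faulted in by the paging subsystem when touched */
    char *map = mmap(NULL, st.st_size, PROT_READ, MAP_PRIVATE, fd, 0);
    if (map == MAP_FAILED) { perror("mmap"); close(fd); return 1; }

    long sum2 = 0;
    for (off_t i = 0; i < st.st_size; i++)
        sum2 += (unsigned char)map[i];

    printf("read() sum=%ld  mmap() sum=%ld\n", sum1, sum2);
    munmap(map, st.st_size);
    close(fd);
    return 0;
}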

--
virtualization experience starting Jan1968, online at home since Mar1970

Big Iron — The Mainframe Story

Thanks, it was interesting. What I got a big kick out of was how IBM did
not mention their abandonment of education in the 80's and 90's. Their
2003 date was too late and a dollar short.

I also thought it was interesting how they danced around some
topics, saying a little bit but trying not to give the reason why IBM
did such a fantastic job in making sure new systems were supposedly
more compatible with the old. Language Environment was not mentioned
as I am sure it is still an embarrassment to IBM.

What should be interesting in the years to come is how good/bad IBM will
be at maintaining compatibility with different OS's and languages. I am
glad I am no longer full time into the beast as I suspect it will go
down hill because the new writers of OS's will not care about
compatibility the way the oldsters did.

a lot of IBM leaving higher education was about the time of the 23jun69
unbundling announcement ... and starting to charge for (application)
software, SE services, etc (result of various litigation ... including
gov.) ... which saw a significant reduction in "education discounts".

IBM tried to come back in the early 80s (somewhat about the same time
some of the gov. examination was being reduced) with ACIS ... starting
to give large grants to lots of education institutions. Part of the
issue was trying to rampup/staff a new organization from scratch
... that had an enormous pot of money; there were some jokes that
transfers from other organizations were people the other organizations
wanted to turf anyway.

lots of technology you see today came out of that period/funding ...
just is very little mainframe.

some of the pull back in the 90s was a result of the corporate troubles
of the early 90s and the corporation going into the red.

In the mid-80s, top executives were predicting that worldwide revenue
would double in a few years ... and there was a huge build-out of
manufacturing capacity (anticipating that doubling) ... instead it went
in the opposite direction. In the mid-80s, it wasn't very career
enhancing to show the effect of the growing commoditizing of hardware and
the resulting shrinkage of profit margins on same (and predicting what
needed to be done about corporate expense structure).

--
virtualization experience starting Jan1968, online at home since Mar1970

CMS on MVS

PaulGBoulder@AIM.COM (Paul Gilmartin) writes:
Where did you put NUCON? IIRC, there's an EXTRN for NUCON, which
implies the possibility of relocating it and using an ADCON as a
base. But IIRC also there's so much hardcoded "USING NUCON,R0"
that relocation would be futile. Aren't the PSWs also mapped in
NUCON? Or does the CMS nucleus much care about that?

there is the approach that has simulation of CMS system functions
... as opposed to running the CMS kernel ... these would be akin to
CMS simulation of os/360 system functions for running os/360
applications.

There was a joke in the late 70s that the 64K bytes of os/360 simulation
code in cms was much more cost/effective than the MVS 8mbyte os/360
simulation ... and as has been alluded to in previous posts, there were
very large, major internal MVS applications that started pushing
system boundaries and had to be ported to CMS (as a way of getting
around the 7mbyte application restriction and other limits). In a couple
of cases, there was 12kbytes of additional simulation written to include
support for os/360 features not already supported by CMS.

note that the simulation of vm370 (diag) functions in MVS was the method
used for getting vm370 tcp/ip support running in the MVS environment
... for the original MVS tcp/ip product.

I know there have been lots of comments about the efficiency of that
product ... but the base wasn't much better in vm370. However, early on,
I added the rfc1044 support to the base product and in some tuning tests
at cray research, got approx 500 times improvement in the number of
instructions executed per byte moved. misc. past posts mentioning
rfc1044 support
http://www.garlic.com/~lynn/subnetwork.html#rfc1044
--

rfochtman@YNC.NET (Rick Fochtman) writes:
My 2 cents worth: the "cheap DASD" doesn't live up to the reliability
standards that IBM demands for z/OS. Stop and think, really hard,
about the demands on z/OS DASD storage, as opposed to the "standards"
you enjoy with your PC DASD. How many of your PC's stay up, with DASD
spinning, on a 24/7 basis, for several years without problems?

commodity priced disks drove MTBF past 800,000 hrs, in part because
service & return costs would eliminate slim margins. nearly all hardware
is now nearly identical ... differentiation is some electronics ... and
RAS coming from various replication and RAID technologies.

All current DASD is CKD emulation on top of underlying fixed-block disk
hardware ... that is close to the same across the industry.

I've paid more for things like replicated power supplies and
hot-pluggable ... but that was the frame ... there was essentially no
difference in the underlying disk hardware.

By the mid-80s ... large variety of platforms were moving past hardware
as major contributor to outages ... it was software, human error ...
and environmental ... when we were out marketing our HA/CMP product I
coined the terms geographic survivability and disaster survivability
to differentiate from disaster/recovery. When I was asked to write a
section in the corporate strategic continuous availability document the
section was pulled because of complaints from both Rochester and POK
(about not being able to meet the criteria at the time). misc. past
posts mentioning availability (and/or corporate continuous availability
strategy document)
http://www.garlic.com/~lynn/submain.html#available

old posts mentioning CKD dasd, multi-track searches and FBA (including
being told I had to show business case to justify $26M cost for MVS
documents & education ... even if I provided MVS fully integrated and
tested FBA support)
http://www.garlic.com/~lynn/submain.html#dasd

ted@loft.tnolan.com (Ted Nolan <tednolan>) writes:
IBM 360/370 assembler was the *second* language we encountered, and that
class was generally considered to be the "weed out" class for the CSCI track.

I had 2hr introduction to fortran in the spring and then was hired that
summer to port 1401 MPIO to 360/30. Univ. had 709 running ibsys
tape-to-tape with 1401 handling reader->tape and tape->printer/punch
(with manual moving tape between 1401 tape drive and 709 tape drive).

As part of transition for replacing configuration with 360/67 ... the
1401 was replaced with 360/30 ... which could run MPIO in 1401 hardware
emulation. Apparently having me rewrite in 360 assembler was an exercise
in getting familiar with 360. Datacenter would shutdown 8am sat and
wouldn't reopen until 8am monday ... I got to have the whole datacenter
to myself for the 48hr period (continued the following year ... 48hrs
w/o sleep made attending monday classes a little hard).

I got to design and implement my own interrupt handlers, device drivers,
scheduling, storage management, error recovery, etc. It eventually was
2000 cards and the basic version took almost 30 minutes elapsed time to
assemble. I had conditional assemblies that could either run stand-alone
(with my own device drivers and interrupt handlers) or under os/360 using
open/close, get/put, and DCB macros. The os/360 version took almost an
hour to assemble since each DCB macro took 5-6 minutes elapsed time (you
could tell when it was doing a DCB macro by the pattern of lights on the
360/30 front panel).

Later assemblers got much faster ... the folklore was the early
assembler was so slow because they told the person writing the op-code
lookup that they only had 256 bytes for the implementation. He
supposedly took that as meaning code+data ... so would reread the
op-code lookup table from disk for each statement.

When the configuration was replaced with the 360/67 ... running as a
360/65 with os/360 ... student fortran jobs that ran in approx a second
elapsed time on the 709 tape-to-tape ... were taking over a minute
elapsed time with os/360. This improved to approx. 30 seconds when HASP
spooling was installed.

I was eventually hired to be responsible for production operating system
... part of old presentation I made at the fall68 share meeting in
Atlantic City
http://www.garlic.com/~lynn/94.html#18 CP/67 & OS MFT14

I did a lot of work on the underlying OS/360 system to get fortran
student jobs down to 12.9 seconds (from 30 seconds) ... mostly carefully
arranging the order of all sorts of system stuff on disk to optimize disk
arm seek operation. Finally when WATFOR was installed, elapsed time for
student jobs finally got down to better than where they had been on the 709.

I also rewrote large sections of cp67 to significantly reduce virtual
machine overhead. It was never actually enough to move to
full-production virtual machine operation ... I got to mostly play with
cp67 on the weekends ... and the 360/67 ran as standard 360/65 with
os/360 during the week.

Outgunned: How Security Tech Is Failing Us

Outgunned: How Security Tech Is Failing Us
http://www.informationweek.com/news/security/attacks/240003172

from above:
Our testing shows we're spending billions on defenses that are no
match for the stealthy attacks being thrown at us today. What can be
done?

... snip ...

Long time observation is the basic infrastructure isn't defensible
... like being in a valley with the enemy occupying all the high
ground ... or spending all the money on 6' thick vault door and
setting it up in an open field (not bothering with vault, no walls,
floors, ceilings, etc)

Before Jim Gray disappeared, he con'ed me into interviewing for Chief
Security Architect in Redmond. The interview spanned a few weeks but
we could never come to agreement on a number of issues

15 years ago I suggested that ISPs filter majority of the
transmissions that result in threats and exploits. At the time, the
ISPs claimed that they didn't have such filtering capability
... however, many were actually doing various kinds of filtering when
it was in their financial interest ... filtering that demonstrated
that they could do the other kind also.

one possible explanation why they weren't interested in doing such
filtering was that it might expose them to liability when there was
some exploit (eliminating 99% of the problems ... but not 100%
... somebody might sue them in the remaining 1% of the cases).

the current call does something similar but shifts the liability focus
from the traffic that results in most infections to the infected
machines.

As to the "sick" PCs ... I think I drew an analogy at the time to
vehicle inspections and removing "unsafe" vehicles from the highway
(metaphor in the period to information superhighway)

i somewhat preferred the vehicle metaphor with safety inspections and
license to drive.

seat belts, air bags, safety glass, bumpers, texting while driving,
working brakes, adequate tread, DUI, etc. lots of arguments against
requiring internet safety are similar to what went on in the past
about auto safety

there is some health analogy if you read descriptions of 1919 flu
... had to have recent doctor's examination health certificate to
travel ... checked at railroad stations.

--
virtualization experience starting Jan1968, online at home since Mar1970

PL/1 as first language

"Joe Morris" <j.c.morris@verizon.net> writes:
Unless both my memory and the SHARE web site are wrong, fall 1968 SHARE was
in Atlantic City (Chalfonte/Haddon Hall as the conference hotel...I got
stuck in the Treymore). This was the joint meeting with GUIDE where the two
organizations were proposing a merger, but should be better remembered as
the birthplace of the HASP sing-along.

Fall 1969 was in Boston. Memory says that's where the users raked IBM over
the coals for the incredibly bad OS/360 Release 17 fiasco. One memory from
one of the OS/360 project sessions at that meeting:

I also had a pure HASP presentation about mods. to HASP for tty/2741
terminal support with an editor that implemented CMS edit syntax (written
from scratch) that was done on HASP OS/360 Release 15/16 ... which would
have been aug69 Boston ("SHARE 33" ... not listed in the above).

The univ. sent me the week before to see a member (head?) of the HASP
committee who was at the Cornell datacenter in Ithaca. It was a memorable
trip ... flew into La Guardia for a connecting flight to Ithaca ... which
was a DC3 at the (marine) terminal (now usair?) on the other side of the
field. There was a thunderstorm going thru the area and we sat on the
ground in the plane with the smell of hot fuel oil for an hr or more
... before taking off. Most of the way we were still flying thru the
middle of the thunderstorm and I spent most of the time barfing into a
bag. When the flight stopped at Elmira, I staggered off to find a motel
room ... the next morning I got a rental car and then drove to
Ithaca (and was late for the meeting). a couple past refs:
http://www.garlic.com/~lynn/2006b.html#27 IBM 610 workstation computer
http://www.garlic.com/~lynn/2006e.html#1 About TLB in lower-level caches
http://www.garlic.com/~lynn/2007j.html#79 IBM 360 Model 20 Questions

Later I ran into the person at IBM ... he was doing part of the
801/risc/Iliad program that was to replace the large variety of
different internal microprocessors ... the part that was going to use
801/risc/Iliad as the microprocessor for the followon to the 4341
(4381). When those efforts floundered, he left and joined HP (on their
risc processor effort) and later shows up as one of the principal
architects for Itanium. At IBM, he was also credited with the 3033
dual-address-space support (used by MVS).

--
virtualization experience starting Jan1968, online at home since Mar1970

Fujitsu starts shipping 800 rack 80,000 chip 'K' supercomputer

Brett Davis <ggtgp@yahoo.com> writes:
Mainframes:
IBM Z class (harks back to System 360), and three fringe ASIC designs,
one by Fujitsu, and two by Unisys.
MiniComputers:
None? A market once dominated by the VAX line.
IBM had at least two major mini lines, do either of these still exist?

43xx (360 "mainframe") sold into the same mid-range market as VAX in the
same time frame and, in the one-to-few order market, sold similar
volumes. the big difference in total 43xx volumes was the large
multi-hundred orders by large corporations ... sort of the leading edge
of distributed computing & departmental servers.

internally, the proliferation of 43xx machines contributed to scarcity
of conference rooms (as they were being taken over for 43xx
machines). the internal network had been larger than the
arpanet/internet from just about the beginning until possibly late '85
or early '86 ... and the large proliferation of 43xx in the early 80s
contributed to internal network passing 1000 nodes when arpanet/internet
wasn't much more than 255 nodes.

mid-range started to fall to large PCs and workstations in the mid-80s
... can be seen in the vax volumes. in this time-frame, the 4341
follow-on, ... i.e 4381 was expected to see similar volume growth as
experienced by 4341 in late 70s/early 80s ... but the mid-range market
had already started to move.

circa 1980, there was a program to migrate the large number of different
internal microprocessors to 801/risc (line of iliad chips) ... including
the microprocessor for 4381 and the microprocessor for the s/38
follow-on in the as/400. that effort floundered for various reasons
... and there was another round of cisc microprocessors (and some number
of the engineers left and show up working on risc efforts at other
vendors).

as it happened ... as/400 eventually did migrate to a 801/risc chip in
the 90s (power/pc).

and then there is old email from the end of jan ... when it was still
commercial and scientific/technical ... but possibly just hrs before it
was transferred and we were told we couldn't work on anything with more
than four processors

Michael S <already5chosen@yahoo.com> writes:
VAX is dead, but you can run VMS on Itanium. OpenVMS v.8.4 even runs
on newest Tukwilla blades.
Also, unlike IBM, HP does not prohibit people from running VAX/VMS and
AXP/OpenVMS on emulators. In fact, emulators from stromasys running on
HP Proliant are even officially supported by HP.

for much of its life, mainframe 360 was done by various kinds of cisc
processors running microcode emulation ... the low-to-mid-range tended
to vertical "microcode" ... similar in many ways to recent software
emulators on more well-known processors. the high-end tended to be
horizontal microcode. it wasn't until more recently that you see cisc
processors directly implementing mainframe 360 instructions.

the large profusion of internal processors was the motivation for the
1980-circa effort to converge all those (internal) processors to
801/risc.

old reference to part of a mid-70s project I did looking at high-use
(virtual machine) kernel pathlengths to drop directly into microcode for
mid-range processors. The mainframe microcode emulation tended to run
approx. 10:1 native instructions per 360/370 instruction. It turned out
(at least for kernel code), there tended to be about a byte-for-byte
equivalence between 360 and native instructions. I was given the
requirement that the machine had 6000 bytes of available space for new
microcode and I was to find the highest-used 6000 bytes of kernel
instructions (and design the interface between kernel and microcode).
This basically achieved a 10:1 thruput increase for those 6000 bytes:
http://www.garlic.com/~lynn/94.html#21 370 ECPS VM microcode assist
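
A toy sketch of that selection problem (illustrative only; the routine
names and counts below are made up, and this is not the original analysis
tooling): given per-routine execution counts and sizes from profiling,
greedily take the most-executed kernel routines that still fit in the
fixed microcode byte budget.

/* toy sketch: pick the hottest kernel routines that fit in a fixed
   microcode byte budget (hypothetical profile data) */
#include <stdio.h>
#include <stdlib.h>

struct routine {
    const char *name;
    long exec_count;   /* from profiling */
    int  bytes;        /* size of the routine */
};

static int by_count_desc(const void *a, const void *b)
{
    const struct routine *x = a, *y = b;
    return (y->exec_count > x->exec_count) - (y->exec_count < x->exec_count);
}

int main(void)
{
    struct routine r[] = {          /* made-up numbers */
        { "dispatch",    900000, 1400 },
        { "page_fault",  700000, 2200 },
        { "priv_op_sim", 650000, 1800 },
        { "io_intr",     400000, 1600 },
        { "sched",       300000, 2100 },
    };
    int n = sizeof r / sizeof r[0];
    int budget = 6000;              /* available microcode space in bytes */

    qsort(r, n, sizeof r[0], by_count_desc);

    int used = 0;
    for (int i = 0; i < n; i++) {
        if (used + r[i].bytes <= budget) {
            used += r[i].bytes;
            printf("move %-12s (%d bytes, %ld executions)\n",
                   r[i].name, r[i].bytes, r[i].exec_count);
        }
    }
    printf("total moved: %d of %d bytes\n", used, budget);
    return 0;
}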

this was later supplanted with SIE instruction on 3081 for doing virtual
machine operation. The issue on 3081 was it had limited microcode space
... and SIE instruction would result in the service processor paging in
microcode from 3310 FBA disk (3081 had some amount of paged microcode).

--
virtualization experience starting Jan1968, online at home since Mar1970

When Merchants Get Rid Of Cardholder Data

This has been tried several times in the past but has floundered on
various difficulties, like merchants needing access to the data in order
to be able to handle charge-backs.

The x9a10 financial standard working group had been given the
requirement to preserve the integrity of the financial infrastructure
for ALL retail payments and did threat & vulnerability studies of
the various types, kinds, and environments involving retail payments
in the mid-90s. It came up with the x9.59 standard which slightly
tweaked the paradigm and eliminated the vulnerability & threat to
cardholder data (i.e. crooks being able to use the information for
performing fraudulent financial transactions). It did nothing about
data breaches, skimming, eavesdropping, insiders and/or other
mechanisms where cardholder information might leak ... it just
eliminated the financial fraud threat when such leakage occurs (and
therefore all financial motivation for the crooks).

recent linkedin posting discussing some of the patent work covering
many pieces of the solution (chips, chip fab. processing, post
fab. processing, issuing, enabling person-centric, lots of other
stuff; originally the claims were going to be packaged as over 100
patent applications)
http://www.garlic.com/~lynn/2010m.html#63

part of the issue was having been given the ALL requirement
... debit, credit, stored-value, high-value, point-of-sale, attended,
unattended, internet, wireless, etc. then the transit industry came
along and wanted the same solution to work there ... which added that
the same chip work as well with contactless as contact, be
simultaneously very high-value and at the same time low-value ... and
work within the elapsed time and power constraints of transit
turnstile. in the mid-90s, I had semi-facetiously said I would take a
$500 milspec part and aggressively cost reduce it by 2-3 orders of
magnitude while increasing the security.

Mainframe hacking?

joe.mc24@YAHOO.COM (Joe Mc) writes:
I'm getting into a rather heated argument with a non mainframe
colleague about whether the mainframe has been hacked or
not. Legitimate hacking, not a disgruntled employee doing something
illegal and not loss of tapes or other media. I'm talking the
mainframe platform. Thoughts?

once I took the bait on such a taunt

prior to virtual memory being announced for 370 ... an internal document
found its way into the hands of somebody from the press (sort of a
corporate pentagon papers thing). there was a big investigation and
afterwards all the corporate copiers were retrofitted with a
(unique) ID-tag that would show up on every page copied. for an
example, see the bottom of each of these scanned pages from a
Gray presentation
http://www.garlic.com/~lynn/grayft84.pdf

somewhat as a result, the future system project (was going to replace
all 370, as different from 360/370 as 360 had been different from prior
generations) went to vm370-based softcopy documents ... with some
additional security features added to vm370s that hosted the future
system documents. misc. past posts mentioning future system
http://www.garlic.com/~lynn/submain.html#futuresys

One weekend I had some benchmark time in a machine room that contained
one such vm370 ... and some of the people responsible (for the special
security addons supporting the super-secure softcopy future system
documents) taunted that even if I was left alone in the machine room
... I still wouldn't be able to access the documents. I countered that
it would take less than five minutes ... most of the time was making
sure the system was disabled from any access external to the machine
room ... and then I flipped a bit in storage ... so that
anything/everything entered was accepted as a valid password.

as to viruses ... there is the xmas exec that occurred on bitnet almost
exactly a year before the morris worm on the internet. bitnet was a
corporate sponsored higher educational network ... using similar
technology to that used in the (mostly vm370 based) internal network
(which was larger than the arpanet/internet from just about the
beginning until possibly late 85 or early 86) ... misc. past posts
mentioning bitnet (&/or EARN)
http://www.garlic.com/~lynn/subnetwork.html#bitnet
misc. past posts mentioning the internal network
http://www.garlic.com/~lynn/subnetwork.html#internalnet

back in the days of the open bar ... one of the activities was seeing
how many bottles could be slipped into your jacket for when SCIDS was
closed. The story I was told at the time was that IBM'ers weren't
allowed to include alcohol in travel expenses ... so it was bundled as
part of SHARE registration. The other activity was thursday night
(actually very early friday am), after scids was closed ... the
unconsumed beverage was taken to the SHARE president's suite ... and one
of the activities was helping limit the amount that had to be dealt
with.

--
virtualization experience starting Jan1968, online at home since Mar1970

with references to commodity disk having MTBF in the million-plus hrs
and google's (then) recently published study: "Failure Trends in a
Large Disk Drive Population" (involving more than 100,000 drives).

there was a similar but different study from the period claiming that
Google's computing infrastructure was about 1/3rd of the cost (including
doing their own assembly, management, maintenance) compared to ordering
from brand name vendors ... by carefully studying which components to buy
in quantity and putting them together themselves.

--
virtualization experience starting Jan1968, online at home since Mar1970

dba@LISTS.DUDA.COM (David Andrews) writes:
You could stay up all night typing in random subscriber IDs to see what
you would get. It didn't take long to discover that the first three
digits of a subscriber ID was the telephone area code of the subscriber.
That cut down on the search space quite a bit. Area code 212 (New York
City) was a target-rich environment!

in the 90s there were summaries of wardialing ... dialing every number
in an area code/region ... looking for modems ... and then taking
signatures of the modems & the connected systems (some areas had 3% of
numbers with some sort of modem connection).

there was some amount of time spent in the financial industry meetings
(in the white house annex) on Y2K remediation ... but a really hot topic
was the ISACs (information sharing databases of vulnerabilities and
exploits) ... which had a bunch of stuff ... a lot with mainframe
dataprocessing ... since a lot of the financial industry is mainframe
based. Of major concern (and a lot of discussion) was constructing the
ISAC in such a way that it wouldn't be subject to FOIA
https://en.wikipedia.org/wiki/Freedom_of_Information_Act_%28United_States%29

Financial ISAC website
http://www.fsisac.com/

launched in 1999
http://www.fsisac.com/faq/

since 9/11 ... public facing tends to be much more about terrorists

I was tangentially involved with the (original) cal. data breach
notification act ... having been brought in to help wordsmith the
electronic signature legislation. Several of the participants were
heavily involved with consumer privacy issues and had done detailed,
indepth surveys and found that the no.1 issue was "identity theft",
namely the "account fraud" form where cardholder details from data
breaches were being used by criminals for fraudulent financial
transactions.

The issue at the time was that little or nothing appeared to be being
done about the problem (it wasn't even being publicized) ... so they
seemed to think that the publicity from data breach notifications might
motivate the institutions to provide corrective actions and
countermeasures.

A major issue was that institutions will put in place security measures
to counteract bad things (risks) happening to the institutions. The
problem with the "account fraud" scenario ... is that typically the
"fraudulent financial transactions" were against consumer accounts
... not against the institution ... and therefore the institutions had
nothing at risk and therefore little motivation to provide security.

--
virtualization experience starting Jan1968, online at home since Mar1970

Upthread I mentioned being brought in to consult with small
client/server startup that wanted to do payment transactions on their
server (now frequently called "electronic commerce"). Part of that
effort was something called a "payment gateway" ... basically sat on
the internet and handled payment transactions between the e-commerce
webservers and the payment networks. Part of the "payment gateway"
internet-facing infrastructure was systems that booted and ran from
R/O media (with logging to a write-only interface and periodic scans
of memory looking for real-time compromises).
http://www.garlic.com/~lynn/subnetwork.html#gateway

"live cd" close a number of vulnerabilities ... but there are some
kinds of BIOS exploits that they are subject to (that may have
happened when not running "live cd"). A similar approach ... is using
the new, new (old) thing, virtualization to provide R/O isolation (but
with software) ... where it is always active and would include
countermeasures to BIOS compromises.

I was doing a lot of work as an undergraduate ... and would periodically
get requests from the vendor for enhancements ... in retrospect some
of the requests may have originated from that particular customer set.

PCs started out unconnected, with little protection and no
countermeasures for the hostile environment of the internet ... sort
of like going out the airlock of the space station w/o a space suit.

Connecting to the internet is somewhat like the early days of the
automobile, before safety engineering, bumpers, safety glass,
collapsible steering column, crush zones, padded dashboards,
seatbelts, airbags, guardrails, traffic signs, rules-of-the-road, etc.

The new, new (old) technology, virtualization, is being used a little
like a space suit ... a totally disposable browser internet environment
... created from scratch for the period and disposed of when
finished (along with any compromises). The virtualization boundaries
compartmentalize and keep the hostile effects away from the underlying
environment.

Of course, virtualization can also be inverted and used by the
badguys. Say a public internet PC ... where crooks have compromised the
BIOS so that it always boots a stripped down hypervisor ... which in turn
boots whatever the customer is expecting. The hypervisor allows
complete monitoring of all PC activity ... undetected by traditional
malware countermeasures (despite many claims over the past decade by
various vendors).

A large part of the vulnerability in the financial world is the
extensive use of "static" data for authentication ... i.e. pins,
passwords, or even just the account number ... where simple harvesting
of the information enables various forms of replay
attacks. Eliminating "static" data for authentication is a
countermeasure to the harvesting/eavesdropping (replay attack)
compromises ... a little x-over from the "When Merchants Get Rid Of
Cardholder Data" discussion ... also archived here:
http://www.garlic.com/~lynn/2010n.html#72
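
As a rough illustration of what "non-static" authentication data means (a
generic challenge-response sketch, not x9.59 and not production crypto):
the verifier issues a fresh challenge per transaction, the card/device
returns a keyed response over that challenge plus the transaction data,
and anything harvested from one transaction is useless for replay in
another:

/* generic challenge-response sketch (NOT x9.59 and NOT a real MAC --
   toy_mac below is a stand-in just to show the message flow) */
#include <stdint.h>
#include <stdio.h>
#include <stdlib.h>
#include <time.h>

/* stand-in keyed function; a real system would use HMAC or a digital
   signature computed inside tamper-resistant hardware */
static uint64_t toy_mac(uint64_t key, uint64_t nonce, uint64_t amount)
{
    uint64_t x = key ^ nonce ^ (amount * 0x9e3779b97f4a7c15ULL);
    x ^= x >> 33; x *= 0xff51afd7ed558ccdULL; x ^= x >> 33;
    return x;
}

int main(void)
{
    uint64_t shared_key = 0x1234567890abcdefULL; /* provisioned in the chip */

    /* verifier: fresh challenge per transaction -- never reused
       (rand() only varies the demo, it is not crypto-quality) */
    srand((unsigned)time(NULL));
    uint64_t nonce = ((uint64_t)rand() << 32) | (uint64_t)rand();
    uint64_t amount = 4200;                      /* cents */

    /* card/device: responds with a MAC over challenge + transaction data */
    uint64_t response = toy_mac(shared_key, nonce, amount);

    /* verifier: recomputes and compares */
    int ok = (response == toy_mac(shared_key, nonce, amount));
    printf("transaction %s\n", ok ? "accepted" : "rejected");

    /* a harvested (nonce, response) pair is useless against a new nonce */
    uint64_t new_nonce = nonce + 1;
    int replay_ok = (response == toy_mac(shared_key, new_nonce, amount));
    printf("replay with old response %s\n", replay_ok ? "accepted" : "rejected");
    return 0;
}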

That still leaves the active attacks ... where a compromised end-point
basically impersonates the human for generation of a fraudulent
transaction using whatever non-static (authentication) technology is
deployed. The EU FINREAD standard from the last century was designed
as countermeasure to "active" attacks where compromised
end-point impersonates the human (effectively by moving the financial
transaction end-point out to a special hardened device with its own
unspoofable display & keypad).

--
virtualization experience starting Jan1968, online at home since Mar1970

Mainframe hacking?

phil@VOLTAGE.COM (Phil Smith) writes:
Long ago and far away, a friend was looking at the VSE microfiche and
found an undocumented SVC that stored the top half of a register value
in the address contained in the bottom half of the register. He
promptly wrote a program that used that SVC to gain control of the
system. (He was working at IBM, so this was an internal thing.)

at the end of the last century there is the infamous case of a large
financial institution (with a large number of ATM machines) outsourcing
the Y2K remediation of their backend financial transaction processing
system to a software company (selected on the basis of being the low
bidder) ... which they found out much later was a front for a criminal
organization (eventually tripping across some very peculiar pieces of
code that would do some stealthy transactions, that could be triggered
by a very specific combination of entries from an ATM machine).

Note a lot of prepaid originated as stored-value in Europe in
conjunction with chipcards ... which was being used to offset the
scarce and very expensive online connectivity (something that wasn't a
problem in the US and has been addressed over the past two decades for
most of the rest of the world). In the early to mid-90s ... something
analogous was introduced in the US ... but using magstripe and online
(and existing POS terminals and acquiring networks). Those European
chipcard programs seemed to evaporate in the late 90s not long after
EU central banks announced that the stored-value/pre-paid operators
would have to start paying interest on the "stored-value" (i.e. most
of the operators were motivated by the float on the "stored-value";
significant improvement in connectivity and drop in price also
contributed).

--
virtualization experience starting Jan1968, online at home since Mar1970

paul c <anonymous@not-for-mail.invalid> writes:
I was given one or should I say I just took it. I worked for a
hole-in-the-wall research company and we couldn't afford the
maintenance. The parts were all what I called NASA-quality, eg., the
3 HP three-phase electric motor had a label on it saying 'lubricate
every 20 years'. The IBM model number was 2321. It was ten years old
and had 1,700 hours on the clock, so the university we got it from
obviously had problems keeping it running too.

I was an undergraduate at a univ. where the library got an ONR grant for
doing an online catalog ... part of the money went to getting a 2321 (to
host the online catalog). The project was also selected to be one of the
original betatest sites for the CICS product ... and I got tasked to
support/debug CICS.

A decade later, I was at SJR and doing some of the stuff for the original
relational/sql implementation (system/r with Jim Gray) ... when Jim left
for tandem ... he palmed off a bunch of stuff on me.

Another decade and a half later ... I happened to be doing some work with
somebody that had been one of the engineers that developed the 2321 (but
wasn't one of the engineering managers mentioned in the columbia article).

--
virtualization experience starting Jan1968, online at home since Mar1970

paul c <anonymous@not-for-mail.invalid> writes:
The early 'databases' used by the first wide-reaching online network
systems such as PARS involved ruthless reduction of 'attributes' as it
were. Yet it was still possible for the real-time flight and cargo
databases' of large airlines to be stored in some hundreds of MB's, in
other words they could be recorded in the main memory of pretty much
any of today's consumer pc's. Had such memory been available thirty
or forty years ago, I'd venture that the programming landscape would
look different today. There remains much bowing and scraping towards
legacy obstacles. From papers they wrote, it looks like even the
System R people's thinking was dominated more by past physical history
than the likely future. Me too, it wasn't until the 1980's when I
could actually put a computer under my arm that I started to realize
how much more important the logical side of programming is. I think
younger people are hide-bound in a different way, there are now so
many different programming languages and therefore idioms which
encourage them to think that all that can be invented has already been
invented.

in the late 70s, there was contention between the 60s "physical" IMS
dbms group and the system/r group. The IMS group contended that the
implicit index of system/r doubled the physical space needed on disk and
there could be a 4-5 times increase in disk i/o (processing the index).
the system/r group countered that the "implicit" index of system/r
significantly reduced the human and administrative overhead involved in
managing a large IMS database (direct record pointers were exposed as
part of the data a programmer had to handle ... and had to be updated if
the DBMS was re-organized).
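
A loose sketch of the trade-off (illustrative structures only, nothing to
do with actual IMS or System/R internals): with exposed physical pointers
the application holds the record's location directly, which is fast but
breaks when the data is reorganized; with an implicit index the
application holds only a key, and the system pays the extra index space
and I/O to resolve it.

/* illustrative only -- not IMS or System/R data structures */
#include <stdio.h>
#include <string.h>

struct record { char key[8]; char payload[32]; };

/* "physical pointer" style: application code holds the record's
   location; fast, but invalid after the data is reorganized/moved */
static struct record *fetch_by_pointer(struct record *ptr)
{
    return ptr;
}

/* "implicit index" style: application holds only the key; the system
   resolves it (extra space + extra lookups), and the application is
   unaffected by reorganization */
static struct record *fetch_by_key(struct record *table, int n, const char *key)
{
    for (int i = 0; i < n; i++)          /* stand-in for an index probe */
        if (strcmp(table[i].key, key) == 0)
            return &table[i];
    return NULL;
}

int main(void)
{
    struct record db[] = {
        { "ACCT001", "alice" },
        { "ACCT002", "bob" },
    };

    struct record *direct = &db[1];      /* pointer handed to the app */
    printf("by pointer: %s\n", fetch_by_pointer(direct)->payload);
    printf("by key:     %s\n", fetch_by_key(db, 2, "ACCT002")->payload);

    /* after a reorganization that moves records, the saved pointer is
       stale, while the key lookup still works */
    return 0;
}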

starting in the 80s ... there were dramatic increases in the amount of
system storage ... which allowed significant caching of indexes
(mitigating a lot of the additional RDBMS I/O overhead) as well as
dramatic increases in the amount of disk space and reduction in
price/megabyte ... minimizing the additional index overhead ... at the
same time people expertise was becoming scarce and more expensive
(becoming a market inhibitor for IMS).

consulting with IMS development group was one of the things Jim palmed
off on me when he left for Tandem.

I actually had a project in the mid-90s to look at the ten impossible
things related to flt/route finding ... i.e. getting from point A to
point B (for one of the large airline res systems). the implementation
had an on-disk database (design from the late 50s, early 60s).

in the mid-90s, the full OAG raw data for all scheduled commercial
flts in the world was a little over 200mbytes. the reservation systems
turned that into many gbytes of DBMS with huge indexes (again a design
from the 60s). I had recently come off a project doing optimal layout for
large chip & board designs ... and so condensed the raw OAG master
file into approx 30mbytes with lots of organization ... that all fit
into memory ... and then ran a real-time walk thru the data (rather than
dbms lookups ... with various optimizations it ran close to 100 times
faster than the dbms implementation).

It now easily fits in most present day smartphones.
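
A toy sketch of the in-memory approach (illustrative only; the airports
and adjacency below are made up, and this is nothing like the actual OAG
layout): keep the flight segments as an in-memory adjacency structure and
walk it directly per query, instead of issuing a DBMS lookup per hop.

/* toy in-memory route walk (illustrative; not the OAG layout) */
#include <stdio.h>
#include <string.h>

#define NAIRPORTS 5
#define MAXDEG    4

static const char *name[NAIRPORTS] = { "SFO", "DEN", "ORD", "JFK", "BOS" };

/* adjacency list: adj[i] holds indexes of airports reachable nonstop,
   terminated by -1 */
static int adj[NAIRPORTS][MAXDEG] = {
    { 1, 2, -1 },      /* SFO -> DEN, ORD */
    { 2, -1 },         /* DEN -> ORD */
    { 3, 4, -1 },      /* ORD -> JFK, BOS */
    { 4, -1 },         /* JFK -> BOS */
    { -1 },            /* BOS */
};

/* breadth-first walk: fewest segments from src to dst */
static int hops(int src, int dst)
{
    int queue[NAIRPORTS], dist[NAIRPORTS];
    int head = 0, tail = 0;
    memset(dist, -1, sizeof dist);
    dist[src] = 0;
    queue[tail++] = src;
    while (head < tail) {
        int cur = queue[head++];
        if (cur == dst) return dist[cur];
        for (int j = 0; j < MAXDEG && adj[cur][j] >= 0; j++) {
            int nxt = adj[cur][j];
            if (dist[nxt] < 0) { dist[nxt] = dist[cur] + 1; queue[tail++] = nxt; }
        }
    }
    return -1;
}

int main(void)
{
    printf("%s -> %s: %d segments\n", name[0], name[4], hops(0, 4));
    return 0;
}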

--
virtualization experience starting Jan1968, online at home since Mar1970

paul c <anonymous@not-for-mail.invalid> writes:
Later I met an Amdahl salesman who said, "give me more of this
relational stuff, I can sell all the 'boxes' it needs". When a boss
of mine plunked for an Amdahl cpu, the local IBM field manager invited
him over for coffee and 'career counselling'. That was in the
mid-1980's. By the early 1990's even IBM could see which way the wind
was blowing and introduced the RS6000, basically a Unix machine, an
attempt to hedge their bets, having already blown their early PC
lead. In other words, just like other people do now, they knew
something big was happening, they just didn't know what.

I knew many of the people at amdahl ... including the guy doing the
amdahl dbms HURON.

801/risc was "invented" by john cocke (at ibm) in the 70s ... I've
claimed that it was attempting to go to the opposite extreme of the
(failed) future system project ... some past future system posts
http://www.garlic.com/~lynn/submain.html#futuresys

around 1980 there was an effort to converge the large variety of different
internal microprocessors to 801 (iliad) ... controller microprocessors,
microprocessors used for low & midrange mainframes, the microprocessor
for the s/38 followon ... the as/400, etc. For various reasons that effort
failed ... and some number of engineers left and then show up at other
vendors working on risc efforts. there was the OPD joint effort with
research for ROMP and the displaywriter followon. For various reasons
that got canceled and the group looked around for something else to do
with the hardware ... and hit on the unix workstation market ... eventually
releasing the PC/RT with AIXv2 (they hired the group that had done the
port of unix to the pc ... to do a port to pc/rt). the group then worked
on a followon chip to ROMP ... called RIOS ... which was eventually
announced and shipped as the RS/6000 workstation with AIXv3. misc. past
posts mentioning 801, risc, romp, rios, pc/rt, iliad, rs/6000, etc.
http://www.garlic.com/~lynn/subtopic.html#801

note that while risc/Iliad floundered and various cisc chips were done,
including for the as/400 ... as/400 did finally move to 801/risc
(power/pc) chip in the 90s.

in the mid-80s timeframe I had done some work on putting large number of
(801) "blue iliad" chips into large number of racks for various kinds of
dataprocessing. "blue iliad" was never finished. However, with rios ...
I started project to do the ha/cmp product along with cluster scaleup.
some old email related to cluster scaleup
http://www.garlic.com/~lynn/lhwemail.html#medusa

where there was lots of work on cluster scaleup to address both
commercial as well as numerical intensive (supercomputer) markets. old
post mentioning early jan92 meeting in ellison's conference room on
RDBMS scaleup.
http://www.garlic.com/~lynn/95.html#13

however in that time frame, the corporate supercomputer effort was out
looking for technologies and discovered what we were working on.

there was some folklore that the commercial mainframe dbms group didn't
mind because what I was doing was possibly five years ahead of where they
were at. I had also been asked to write a section for the corporate
continuous availability strategy document ... but it was pulled when
both rochester (as/400) and pok (mainframe) complained (that they
couldn't meet the objectives). I had coined the terms disaster
survivability and geographic survivability (to differentiate from
disaster/recovery) when I was out marketing ha/cmp ... some past posts
mentioning availability
http://www.garlic.com/~lynn/submain.html#availability

there is also some folklore that the distributed lock manager I had
designed for ha/cmp was reverse engineered and used by some rdbms for
cluster operation on other vendor platforms.

part of the cluster rdbms was working with various rdbms vendors that
had both vax/cluster as well as unix implementations. some of the ha/cmp
distributed lock manager was done to ease migration of their vax/cluster
support over onto unix platform (one such rdbms vendor did contribute
the list of the top ten things wrong with the vax/cluster distributed
lock manager ... that needed fixing).

there first was tech. transfer of system/r from bldg. 28 to endicott for
sql/ds. one of the people mentioned in the ellison conference room
meeting claimed to have handled much of the tech transfer from endicott
to STL for (mainframe) db2.

a completely different rdbms ... originally called shelby ... done in
C-language in Toronto for OS/2 ... was eventually announced for
non-mainframe platforms as DB2 (also ... although it was/is a completely
different implementation).

--
virtualization experience starting Jan1968, online at home since Mar1970

paul c <anonymous@not-for-mail.invalid> writes:
Also heard that IBM's mainframe salesmen who made big commissions
selling the hardware to run IMS (not sure exactly when the software
itself was 'unbundled') ran an active campaign within IBM to discredit
Codd. It was very personal and nasty and may have caused him to have
a stroke.

presumably as a result of various litigation (including by the gov)
... there was the 23jun69 unbundling announcement and starting to charge
for "application" software (although they managed to make the case that
kernel software should still be free). misc. past posts mentioning
unbundling
http://www.garlic.com/~lynn/submain.html#unbundling

one of the other things I worked on as an undergraduate at the univ. was
a mainframe clone controller ... four of us were written up & blamed for
being responsible for some amount of the clone controller business. some
past posts
http://www.garlic.com/~lynn/subtopic.html#360pcm

then in the early 70s the company started the future system effort
... which was going to completely replace the existing 360/370 mainframe
architecture ... and was as different as 360 had been from
previous generations. various articles claim a major motivation was the
clone controller business. since there wasn't going to be any more
360/370 mainframe ... those software & hardware product pipelines were
allowed to go dry. then when the future system effort failed, there was a
mad rush to get hardware & software products back into the 370 pipeline.
the distraction of the future system effort is claimed to have contributed
significantly to allowing mainframe clones (like amdahl) to gain a market
foothold. misc. past posts
http://www.garlic.com/~lynn/submain.html#futuresys

one of the results of the mainframe clones gaining a market foothold and
then the mad rush to get stuff back into the product pipeline ... was the
decision to start charging (also) for kernel software. during the future
system period, I had continued to work on 370 software (and was making
less than complimentary comments about how feasible/practical FS
was). Then, in the mad rush to get 370 software back into the product
pipeline ... some piece of what I had been doing was chosen for product
release ... and also selected to be the guinea pig for starting to
charge for kernel software (I had to spend time with business people on
policies for charging for kernel software).

now, amdahl left before the future system effort and claimed not to know
anything about it. however, he gave a seminar in a large auditorium at MIT
in the early 70s about starting his new clone computer business. One of
the questions from the audience was what arguments/justification he had
used to get funding. He made some reference to customers having already
spent something like $200B on developing 360-based software and even if
IBM were to completely walk away from 360 (might be considered a veiled
reference to FS), that was enough to keep him in business through the
end of the century.

--
virtualization experience starting Jan1968, online at home since Mar1970

paul c <anonymous@not-for-mail.invalid> writes:
In the 1970's IBM brought out a system called the S/38. It was really
radical, having a linear addressing scheme that merged memory and
whatever devices were attached, so the fixation on individual devices
was ignored. Apparently internal politics at IBM limited the size of
the S/38 so that it wouldn't compete with the big mainframes.
Customer operations staff used to need to go to IBM courses to learn
what operating system options needed to be toggled to enable the
addition of peripherals. Some other companies like Burroughs had
machines even before then that required no software changes to attach
a new disk drive, in some cases not even down time so Burroughs didn't
make any money charging for courses to learn how to do that. Wasn't
Burroughs stupid!

folklore is that with the failure of FS ... some of the people retreated
to rochester and came out with an extremely scaled-back subset.

One of the issues that helped kill FS was a study that claimed that if an
FS machine was built from the fastest 370 technology then available
(370/195), applications running on that machine would have the thruput of
a 370/145 (about a 30 times slower machine). s/38 wasn't limited ... it
just was selling into a market that didn't notice a possible 30 times
slowdown.

the s/38 large linear space motivated it to be an original disk RAID
adopter. there was folklore about taking days to restore a s/38 after
a single disk failure ... since all disks were treated as a single pool
with possible scatter allocation occurring across all disks (you didn't
back up a single disk ... you backed up the whole system as a single
unit, which then required a single complete restore, even if there was
only a single disk failure). up until then there needed to be some
attention paid to disk allocation/recovery since single disk failure
was a common failure/recovery scenario.

besides letting me play DBMS in bldg. 28 ... they also let me play
disk engineer in bldgs. 14&15 ... one of the engineers there has a
patent from the 70s on the RAID technology originally deployed by the
s/38. misc. past posts getting to play disk engineer
http://www.garlic.com/~lynn/subtopic.html#disk

independent of that there has been this whole CKD DASD vis-a-vis FBA
discussion. CKD DASD was something of a sensible trade-off in the 60s
... but the balance was already starting to shift by the mid-70s ...
when the disk division came out with FBA disks. All the platforms except
the favorite son, the mainstream MVS operating system, added support for
FBA512 (where it was no longer necessary to change software for new
disks with different sizes and/or geometries). misc. past posts
mentioning ckd, fba, dasd, etc
http://www.garlic.com/~lynn/submain.html#dasd

All devices are now FBA512 ... even those that are supposedly CKD DASD
... which is actually a hardware emulation on top of underlying FBA512
device.

There is now some drive to move to FBA4096 implementation (because of
various factors like error correction efficiencies) and there are recent
discussions about various platforms which may or may not be able to
handle FBA4096 transparently.
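
A back-of-the-envelope on why larger blocks help (the per-sector framing
and ECC byte counts below are illustrative assumptions, not vendor
numbers): each sector carries fixed framing plus ECC, and one 4096-byte
sector amortizes that overhead over eight times the data of a 512-byte
sector.

/* back-of-the-envelope format efficiency; overhead numbers are
   illustrative assumptions, not vendor specifications */
#include <stdio.h>

int main(void)
{
    double framing = 40;   /* assumed gap/sync/address-mark bytes per sector */
    double ecc512  = 50;   /* assumed ECC bytes for a 512-byte sector */
    double ecc4096 = 100;  /* assumed ECC bytes for a 4096-byte sector */

    /* eight 512-byte sectors vs one 4096-byte sector, same 4096 data bytes */
    double small = 8 * (512 + framing + ecc512);
    double large = 4096 + framing + ecc4096;

    printf("8 x 512-byte sectors: %.0f raw bytes (%.1f%% data)\n",
           small, 100.0 * 4096 / small);
    printf("1 x 4096-byte sector: %.0f raw bytes (%.1f%% data)\n",
           large, 100.0 * 4096 / large);
    return 0;
}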

--
virtualization experience starting Jan1968, online at home since Mar1970

paul c <anonymous@not-for-mail.invalid> writes:
At the same time (early 1990's), Steve Balmer had briefly retired from
Microsoft with his early profits and Amdahl tried but failed to hire
him, still dreaming that technology can be managed by the right person
(today, whatever crystal ball skills he has aren't stopping erosion of
the Microsoft franchise, who knows when that will turn into an
avalanche). Not much later, Codd was willing to go to Amdahl to take
over one of their software products and there was no argument about
money (it was big money), it fell through because they wouldn't give
him the title he wanted.

codd's office was on 2nd flr of bldg. 28. I had an office on the first
floor ... a few doors down from backus' office. I also had part of wing
and labs out in los gatos lab (bldg. 29) ... both bldg. 28 and bldg. 29
have since been plowed under (in mid-80s, research moved out of bldg. 28
and up the hill).

before Jim disappeared ... he con'ed me into interviewing for chief
security architect in redmond ... the interview went on for a few weeks
... but we couldn't come to agreement.