mentions that variable rate loans nearly took citibank down in the 80s
... which resulted in them totally getting out of the mortgage business.
recent reference
http://www.garlic.com/~lynn/2007r.html#60 Fixing our fraying Internet Infrastructure

the other item mentioned in the old, long winded, tome was that the last
incident involving the mortgage market in the 80s resulted in an enormous
bailout ... so large that it is carried offbooks ... since it would
otherwise swamp the budget. the claim is that it is so large that it
totally wipes out all real estate appreciation that occurred in the 70s
and 80s.

it apparently ranked number one in terms of unfunded obligations ...
at least until the recent round with the medicare drug bill ... that
the comptroller general has railed about ... with numbers claimed to
possibly be something like four times larger than the real estate
bailout from the 80s.

SKnutson@GEICO.COM (Knutson, Sam) writes:
You should have the PTFs for z/OS APAR OA17114 installed if you are
using paged fixed buffers in DB2 V8. Not having it was one of the
causes of a z/OS outage here when a DB2 DBA accidently overcommitted
storage to DB2.

aka application page fixed buffers ... allows applications to specify
the "real addresses" in the channel program ... avoiding the dynamic
channel program translation (creating a duplicate of the channel program
passed by excp/svc0) and dynamic page fixing that otherwise has to occur
on every i/o operation (however, it can reduce the pageable storage
available to the rest of the system)
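to make the trade-off concrete, here's a rough sketch (in python, my own
illustration rather than actual MVS/EXCP code) of the per-i/o
translate-and-fix path versus buffers that are already page fixed and
already carry real addresses:

from dataclasses import dataclass

PAGE = 4096

@dataclass
class CCW:                 # one channel command word in a channel program
    opcode: str
    buffer_addr: int       # virtual address as supplied by the application

class IoSupervisor:
    def __init__(self):
        self.fixed_pages = set()       # pages currently pinned in real storage

    def real_addr(self, vaddr):
        return vaddr                   # identity mapping, just for the sketch

    def translate_and_fix(self, program):
        # per-i/o path: build a duplicate ("shadow") channel program with real
        # addresses and temporarily fix every page the program touches
        touched = set()
        shadow = []
        for ccw in program:
            page = ccw.buffer_addr // PAGE
            touched.add(page)
            self.fixed_pages.add(page)                 # dynamic page fix
            shadow.append(CCW(ccw.opcode, self.real_addr(ccw.buffer_addr)))
        return shadow, touched

    def unfix(self, touched):
        self.fixed_pages -= touched                    # undone after every i/o

# with application page-fixed buffers, the pages stay fixed long term and the
# application's channel program already contains real addresses, so neither the
# shadow copy nor the fix/unfix pair happens on each operation -- at the cost of
# those pages no longer being pageable for the rest of the system
iosup = IoSupervisor()
shadow, touched = iosup.translate_and_fix([CCW("READ", 0x12345)])
iosup.unfix(touched)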

from above:
Some good news: after a long hard decade, OpenPGP is now on standards
track. That means that it is a standard, more or less, for the rest of
us, and the IETF process will make it a "full standard" according to
their own process in due course.

part of the issue was a lot of resistance to any sort of certificate-less
operation in the ietf pkix contingent. some of the pkix backing of
certificate-based PKI operation dates back to the early 90s in the days
of x.509 identity digital certificates. it has been my observation
that by the mid-90s, many institutions had realized that x.509 identity
digital certificates, increasingly overloaded with excessive personal
information, represented significant privacy and liability issues.
as a result, many institutions retrenched to
relying-party-only certificates
http://www.garlic.com/~lynn/subpubkey.html#rpo

effectively containing some form of record locator (account number,
userid, etc, where the necessary information was actually located)
and a public key. however, it was trivial to demonstrate that

1) this apparently was attempting to recoup some of the massive
investment that went into PKI-type deployments

2) the PKI/digital certificates were actually redundant and superfluous
(aka the public key was frequently already in the same record with all
the other information).

part of the orientation of x9.59 financial standard protocol was not
just that the digital certificates were redundant and
superfluous ... but that even the abbreviated relying-party-only
digital certificates would represent adding one hundred times payload
and processing bloat to existing payment transactions
http://www.garlic.com/~lynn/subpubkey.html#bloat

from above:
Those selling solutions, or more accurately, those marketing IT
solutions often choose to make products sound new and exciting. Quite
why they do so has often puzzled me since I, as a former IT manager,
have always been highly skeptical of anything really new as it usually
means trouble.

... snip ...

another recent virtualization item ... pushing some of the support down
into BIOS ... getting more analogous to PR/SM and LPARs that appeared
with 3090s in the 80s.

marty zimelis wrote:
Phil,
Unless there was something else out there (a poster or whatever), that
would have been me doing a riff in my VM Performance classes, first for
Amdahl, then for Velocity. Your buddy's time frame is about right (15 years
ago). I was attempting to emphasize the impact of an RPS miss (show of
hands: who remembers what that was?) on response time.

The riff started by me "complaining" that I didn't have a good intuitive
grasp of how fast CPUs were (tens of nanosecond cycle times at that point),
so "let's slow down our timeframe and say a CPU cycle is one second. Then a
page fault from Xstore is satisfied in [nn minutes], a DASD I/O satisfied
from cache takes [mm hours] and one that has to go to the real disk takes
[kk days]. An RPS miss adds [I think it was 16 hours] to that."

i had started making statements that disk relative system thruput had degraded by
an order of magnitude over a period of years (processors had gotten much faster
than disks had gotten faster)

at some point, somebody in gpd (disk division) took exception and assigned the gpd
performance group to refute the statements. after several weeks, they effectively
came back and said that i had understated the degradation because taking into
account (the introduction of) RPS-miss actually made it worse.

part of this was that when i had started doing dynamic adaptive resource
management, i attempted to include (dynamic adaptive) scheduling to
the bottleneck (as undergraduate in the 60s). in the 70s, bottlenecks
started shifting from real storage to disk ... and you started seeing
real storage being used more and more as "caching" ... either outboard
in devices ... or by the system directly in processor storage (as a
means of attempting to compensate for disk's growing system thruput
bottleneck). misc. past posts mentioning resource manager
http://www.garlic.com/~lynn/subtopic.html#fairshare

somewhat a side investigation was that we had implemented disk record
access trace and cache model in the late 70s. the cache model looked
at all sorts of trade-offs based on actual disk record access traces
(from a large number of different kinds of production environments)

one of the findings ... was given all other things being equal, one
large common system cache was always better than partitioning the same
amount of electronic storage out into channel-level, controller-level,
and/or device-level caches (from cache efficiency standpoint). The
counter forces have been that there have been limitations on total
system memory, cost differential between different kinds of electronic
storage, and/or processor overhead in managing system-level cache.
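a rough sketch of that kind of comparison (my own toy simulation in
python, not the original trace-driven cache model): the same total
number of cache slots, run once as a single shared LRU cache and once
split per device, against a skewed synthetic reference string:

from collections import OrderedDict
import random

class LRUCache:
    def __init__(self, capacity):
        self.capacity = capacity
        self.slots = OrderedDict()
        self.hits = self.misses = 0
    def access(self, key):
        if key in self.slots:
            self.slots.move_to_end(key)
            self.hits += 1
        else:
            self.misses += 1
            if len(self.slots) >= self.capacity:
                self.slots.popitem(last=False)     # evict least recently used
            self.slots[key] = True

def run(trace, devices, total_slots, partitioned):
    if partitioned:
        caches = [LRUCache(total_slots // devices) for _ in range(devices)]
    else:
        caches = [LRUCache(total_slots)]           # one common system cache
    for dev, rec in trace:
        caches[dev % len(caches)].access((dev, rec))
    return sum(c.hits for c in caches) / len(trace)

random.seed(1)
# skewed trace: most references go to a few busy devices
trace = [(random.choice([0, 0, 0, 1, 1, 2, 3]), random.randrange(200))
         for _ in range(20000)]
print("global  hit ratio:", run(trace, 4, 100, partitioned=False))
print("per-dev hit ratio:", run(trace, 4, 100, partitioned=True))

the shared cache can shift its slots toward the busy devices, which is
roughly the effect described above; the fixed per-device split cannot.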

of course this somewhat supported the work that i had (also as
undergraduate in the 60s) done on global LRU replacement
algorithms vis-a-vis "local LRU" replacement algorithms (some work
that was going on in the 60s about the same time i was working
on global LRU replacement). misc. past posts mentioning
replacement algorithms and cache management
http://www.garlic.com/~lynn/subtopic.html#wsclock

this even dragged me into a dispute that brewed in the 80s over a
stanford phd thesis on global LRU replacement (vis-a-vis local
lru replacement).

i had done the global LRU replacement stuff that shipped in
cp67 (and later vm370 when the resource manager reintroduced some cp67
technology back into vm370). the grenoble science center had done work
on implementing local lru replacement for cp67 and published the
results in cacm in the early 70s. The cp67 global LRU running
on cambridge science center machine and the cp67 local lru running on
grenoble science center machine were the only live, production
comparisons.

ATMs

R.Skorupka@BREMULTIBANK.COM.PL (R.S.) writes:
Yes, I can. AFAIK z/OS version is not popular one. I know *big* ATM
installation which migrated from z/OS to NonStop. People from ACI
claimed that most of their installtions are not on mainframe.

Timothy: I like mainframes, I have personal interest in mainframe
business growth (at least survive), but I see no reason to be
unhonest.

ATMs

timothy.sipples@US.IBM.COM (Timothy Sipples) writes:
It's an interesting bit of history that the first Tandem machine wasn't
available until 1976, well after the first electronic ATM (1967) and lots
of other ATMs. From what I've read the first networked ATM appeared in
1968, and the first "popular" ATM (i.e. same model placed into service by
more than one bank) was the IBM 2984 starting in 1973. The IBM 2984
offered variable cash withdrawals and instantly deducted from your account,
so it was 100% on-line -- 34 years ago. (I remember my father using our
local bank's first ATM, newly installed, when I was a young child. It
seemed like magic.) Presumably most if not all of these ATMs connected to
IBM System/360s and /370s. Tandem came along after almost a decade of
ATMs.

early work was done at los gatos lab ... before i was spending any time
there. however, i do remember people talking about having worked on the
development. they had a large supply of bills from numerous different
countries ... which they kept in a locked vault in the basement (for
testing with the machines during development). they also mentioned a story
about one of the early machines going in across the street from a fast
food restaurant and kids feeding condiment packets into the card slot
(one of the early bug fixes was a countermeasure for such an attack).

DASDBill2@AOL.COM (, IBM Mainframe Discussion List) writes:
I made a mistake. A track not in the cache would take on the order of 20
milliseconds, so that would equate to 20 days instead of one day. A track
already cached would result in an access time of one millisecond. If the 4K
block can be found in a buffer somewhere in virtual storage inside the processor,
it might take from 100 to 1000 instructions to find and access that data,
which would equate to 100 to 1000 seconds, or roughly one to 17 minutes. And
that assumes that the page containing the 4K block of data can be accessed
without a page fault resulting in a page-in operation (another I/O), in which
case we are back to several days to do the I/O.

By the way, it takes at least 5000 instructions in z/OS to start and finish
one I/O operation, so you can add about two hours of overhead to perform the
I/O that lasts for 20 days.
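a small sketch of the same "slow the clock down" arithmetic (the cycle
time and latencies below are illustrative assumptions, roughly matching
the figures quoted above, not measurements):

CYCLE = 10e-9     # assume a 10ns CPU cycle ("tens of nanoseconds" per the riff)

def scaled(real_seconds):
    # how long the event "feels" if one CPU cycle is stretched to one second
    return real_seconds / CYCLE

def pretty(seconds):
    for unit, size in (("days", 86400), ("hours", 3600), ("minutes", 60)):
        if seconds >= size:
            return f"{seconds / size:.1f} {unit}"
    return f"{seconds:.0f} seconds"

events = {
    "5000-instruction i/o pathlength (1 instruction/cycle)": 5000 * CYCLE,
    "cache-hit DASD access (~1 ms)": 1e-3,
    "DASD access that goes to the real disk (~20 ms)": 20e-3,
}
for name, t in events.items():
    print(f"{name}: {pretty(scaled(t))}")
# prints roughly 1.4 hours, 1.2 days, and 23.1 days respectively -- the same
# ballpark as the quoted "about two hours", "one day", and "20 days"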

i had been making comments over a period of years that disk relative
system thruput had declined by an order of magnitude (i.e. disks were
getting faster but processors were getting much faster, faster). this
eventually led to somebody in the disk division (gpd) assigning the
gpd performance group to refute the statements. after several weeks
they came back and effectively said that i had somewhat understated
the disk relative system thruput degradation ... when RPS-miss was
taken into account.

one of the issues is whether the 5k instruction pathlength is the roundtrip
from EXCP (including channel program translation overhead) or the roundtrip
just after it has been passed to the i/o supervisor???

for comparison numbers ... i had gotten cp67 total "roundtrip" for
page fault down to approx. 500 instructions ... this included page
fault handling, page replacement algorithm, a prorated fraction of
page i/o write pathlength (which includes everything to start/finish
i/o), total page i/o read pathlength (including full i/o supervisor),
and two task switches thru dispatcher (one to switch to somebody else,
waiting on the page fault to finish and another to switch back after
the page i/o read finishes). to get it to 500 instructions involved
touching almost every piece of code involved in all of the operations.

I believe the "5000" instruction number was one of the reasons that
3090 expanded store was a synchronous instruction (since the
asynchronous overhead and all related gorp in mvs was so large).

earlier, there had been some number of "electronic" 2305 paging devices
deployed at internal datacenters ... referred to as the "1655" model (from
an outside vendor). these effectively had low latency but were limited
to channel transfer speed and still cost whatever the asynchronous
processing overhead was.

the 3090 expanded store was done because of physical packaging issues
... but later when physical packaging was no longer an issue ... there
were periodic discussions about configuring portions of regular memory
as simulated expanded store ... to compensate for various shortcomings
in page replacement algorithms.

with regard to the cp67 "500" instruction number vis-a-vis MVS ... i
would periodically take some heat regarding MVS having much more
robust error recovery as part of the 5000 number (even tho the 500
number was doing significantly more). so later when i was getting to
play in bldgs 14 & 15 (dasd engineering lab and dasd product test
lab), i had the opportunity to rewrite the vm370 i/o supervisor. the labs
in bldg. 14&15 were running processor "stand-alone" testing for the
dasd/controller "testcells" (one at a time). They had tried doing this
under MVS but had experienced 15min MTBF (system crashing and/or
hanging with just a single testcell). I undertook to completely
rewrite the i/o supervisor to make it absolutely bullet proof, allowing
concurrent testcell operation in an operating system environment. lots of
past posts mentioning getting to play disk engineer
http://www.garlic.com/~lynn/subtopic.html#disk

the attackers then are looking at 1) avoiding being identified and 2)
avoiding having the compromised device identified.

the skimmed information (from compromised devices) is used to create
counterfeit cards (which are then used for fraudulent transactions).
however, the attackers may go to great lengths to avoid usage patterns
that might result in identifying the original compromised device(s) (and
shutting them down as a source of continued information)

some of the device skimming compromises include wireless &/or internet
harvesting techniques ... aka there is local recording storage inside the
device and the recordings can be harvested via wireless (or internet)
techniques ... as a countermeasure to a suspect device being under
surveillance.

there were other efforts to address fraudulent payment transactions.
one involved a chipcard that was strongly oriented towards
countermeasures against lost/stolen cards, i.e. the chipcard was shown
to be highly resistant to crooks in possession of a lost/stolen card.
however, the card was still vulnerable to the growing incidence of
skimming attacks (enabling a counterfeit chipcard to be created)

the confidence in the integrity of this chipcard was such that the
terminal/device interface was changed so that once a terminal believed
it was dealing with a valid chipcard, the terminal would follow
instructions from the chipcard.

now one of the fraud countermeasures in the current electronic payment
environment is that with online transactions, the account can be flagged
and new transactions not approved.

the terminal/chipcard interface change would have the terminal asking
the chipcard 1) has the correct PIN been entered, 2) should the
transaction be offline, and (if the answer to #2 is YES) 3) is the
transaction within the card's credit limit. This new class of
counterfeit chipcards got the label YES CARD ... i.e. the
crooks would program the counterfeit chipcard to always answer "YES"
to all three questions
http://www.garlic.com/~lynn/subintegrity.html#yescard

Because the terminal relied on the (potentially counterfeit) chipcard for
replies to all three questions, the attacker didn't even need to know
the correct pin, and since the transactions would always be offline ...
flagging the account (as in online transactions) was no longer
effective. The skimming attack on terminal/devices was essentially
identical to what was already being used for magstripe card skimming.
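a minimal sketch of the exchange (hypothetical python, my own names and
structure, not an actual EMV implementation) showing why a terminal that
trusts the card for all three answers defeats the online account-flagging
countermeasure:

class ValidCard:
    def __init__(self, pin, credit_limit):
        self.pin, self.credit_limit = pin, credit_limit
    def pin_ok(self, entered):      return entered == self.pin
    def go_offline(self, amount):   return False        # issuer says: go online
    def within_limit(self, amount): return amount <= self.credit_limit

class YesCard:
    # counterfeit card built from skimmed data: answers YES to everything
    def pin_ok(self, entered):      return True
    def go_offline(self, amount):   return True
    def within_limit(self, amount): return True

def terminal_approve(card, entered_pin, amount, account_flagged):
    if not card.pin_ok(entered_pin):                 # question 1
        return False
    if card.go_offline(amount):                      # question 2
        return card.within_limit(amount)             # question 3: no issuer contact
    return not account_flagged                       # online: flagged accounts refused

# a flagged account still stops a lost/stolen *valid* card, but not a YES CARD,
# because the counterfeit card keeps the transaction offline:
print(terminal_approve(ValidCard("1234", 500), "0000", 100, account_flagged=True))  # False
print(terminal_approve(YesCard(),              "0000", 100, account_flagged=True))  # True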

There were some number of other fraud countermeasures built into the
(YES CARD) infrastructure for lost/stolen cards ... but the crooks
would program the counterfeit cards to disregard them.

Translation of IBM Basic Assembler to C?

Anne & Lynn Wheeler <lynn@garlic.com> writes:
the terminal/chipcard change would have the terminal asking the chipcard
1) has the correct PIN been entered, 2) should the transaction be
offline, and (if answered to #2 is YES) 3) is the transaction within the
card's credit limit. The new class of counterfeit chipcards got the
label YES CARD ... i.e. the crooks would program the counterfeit
chipcard to always answer YES to all three questions
http://www.garlic.com/~lynn/subintegrity.html#yescard

earlier in this decade ... something like a million plus such (valid)
cards had been deployed. when the YES CARD attack was explained (dating
back to the previous decade), the response was that they would make sure
that the (valid) issued cards would never answer YES to the question about
doing offline transactions ... i.e. the transactions would always be done
online, and therefore the "flagged" account countermeasure would be able
to prevent/limit possibly fraudulent transactions.

a possible problem or characteristic was that the individuals involved
were so chipcard myopic that they didn't comprehend that the YES CARD
attack is not against a valid card (which had been designed to have high
resistance to lost/stolen card vulnerabilities). in effect, the YES
CARD attack is against the card-accepting terminals (and the rest of
the infrastructure), not against valid cards.

if there hasn't been end-to-end threat and vulnerability analysis and/or
if the myopic focus is purely concentrated on attacks against valid
cards, then other kinds of things can be left wide-open. in this case,
the YES CARD attack took advantage of the apparent myopic focus on the
(valid) chipcards ... to also be able to get around the "account
flagging" fraud countermeasure (for online transactions), which has
worked well against limiting the total amount of fraud that might be
mounted against any specific account.

a general characteristic of these kinds of skimming attacks and
resulting counterfeit card fraudulent transactions ... has been that
the fraudulent transactions would tend to be done as far away as
possible from the compromised, skimming device (avoiding casting suspicion
on the compromised, skimming device and thereby limiting its ongoing
usefulness).

Translation of IBM Basic Assembler to C?

"Charlie Gibbs" <cgibbs@kltpzyxm.invalid> writes:
A local scheme combined a skimmer with a camera mounted in the
ceiling to record PIN keystrokes (the unit was firmly attached
to the counter in the optimum position for the camera to view it).
At the time of the bust, the operators had 7000 fake cards, neatly
filed with corresponding PINs, ready to go.

standard security practice is that every unique security
domain requires a unique shared-secret (as a countermeasure to
cross-domain attacks, say a local garage ISP with highschool employees
vis-a-vis online banking or a large employer).

PINs, passwords, and other shared-secrets are a form of something you
know authentication. furthermore, multi-factor authentication (like
both a card and a PIN) is considered more secure, assuming that the
different factors are subject to different kinds of
threats/compromises.

however, the proliferation of shared-secret authentication has
overloaded standard human factors ... potentially having to deal
with many tens or maybe hundreds of shared-secrets, the prevailing
human response is to start recording the numerous values (no longer
being able to remember all the something you know values). this shows
up in studies of card-based implementations with accompanying PINs,
where something like 1/3rd of the cards have the corresponding PINs
written on them.

the other characteristic is that the various skimming attacks (of valid
transactions) can represent a common threat/vulnerability against
PIN-based card operation (negating assumptions about the security
strength of multi-factor authentication), i.e. all the information to
perform a fraudulent transaction can be gathered at one time.

however, this wasn't even necessary in the YES CARD scenario. The
standard valid card requiring a pin (assuming the PIN hasn't been
written on the card) is a countermeasure to lost/stolen card. However,
in the YES CARD skimming, it wasn't even necessary to record the valid
pin ... since the terminals would accept the counterfeit YES CARD
telling them that YES, the PIN was valid (regardless of what was
entered).
http://www.garlic.com/~lynn/subintegrity.html#yescard

The new urgency to fix online privacy

Steve O'Hara-Smith <steveo@eircom.net> writes:
Well yes - but an effect similar to Heisenburg applies to this. You
may find out that the account was good for $1000 - but you have now eaten
that and cannot tell how much is left until it says no. You can of course
start high and work down - but velocity checks in the authorisation system
will spot you before many iterations.

long ago and far away ... one of the fraud patterns for a lost/stolen card
was $5 at a selfserve gas pump followed within 20mins with $100+ athletic
shoes ... the selfserve gas pump was a low-risk, quick-getaway test to
see if the account for the lost/stolen card had already been flagged.

the account flagging countermeasure works a lot better in the
lost/stolen card scenario than the skimming scenario ... in the skimming
scenario, the loss might not be realized until the next statement
(the loss tends to be reported a lot earlier in the lost/stolen case, so
the possible fraud interval is greatly shortened).

avg. debit card skimming losses have been pegged up around $1000/account
(in part because there may be longer delay before reporting suspicious
activity)

The new urgency to fix online privacy

Steve O'Hara-Smith <steveo@eircom.net> writes:
[1] authorisation rather than reservation is the term used by the
processing organisations I've worked with.

there is even an institutionalized $1 auth ... i.e. the authorization does
reduce the available credit, however the $1 auths aren't settled ... so
they never show on the statement ... and the auth eventually expires and
the available credit goes back up.
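a minimal sketch of the mechanics (my own model, not any processor's
actual API): only settled amounts post to the statement, while open
authorizations just hold open-to-buy until they settle or expire:

class Account:
    def __init__(self, credit_limit):
        self.credit_limit = credit_limit
        self.posted = 0.0          # settled charges (these appear on the statement)
        self.holds = {}            # open authorizations: auth_id -> amount

    def available(self):
        return self.credit_limit - self.posted - sum(self.holds.values())

    def authorize(self, auth_id, amount):
        self.holds[auth_id] = amount            # reduces available credit now

    def settle(self, auth_id):
        self.posted += self.holds.pop(auth_id)  # now it shows on the statement

    def expire(self, auth_id):
        self.holds.pop(auth_id, None)           # hold drops off, credit comes back

acct = Account(1000)
acct.authorize("card-check", 1.00)   # the institutionalized $1 auth
print(acct.available())              # 999.0 while the hold is open
acct.expire("card-check")            # never settled, so never on the statement
print(acct.available())              # back to 1000.0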

some of the other organizations that were participating in the
electronic signature legislation were also involved in the data breach
and security breach disclosure legislation effort and had done detailed
consumer surveys and studies related to that effort. the primary
concerns that were raised with regard to personal information disclosure
were 1) enabling fraud and 2) use by organizations and
institutions for denial of service.

a lot of churn and swirl around privacy frequently fails to establish
any priority or ranking as to different kinds of threats and
vulnerabilities related to different kinds of personal information
disclosure.

After the Data Breach: Navigating State Disclosure Laws
http://www.ecommercetimes.com/story/60257.html

from above:
Large or small, companies should plan ahead to lessen the burden of
notification in the event of a data breach. "Encryption is the single
most effective way to avoid the negative business impact of data
breaches," says Robert Scott, managing partner at the Dallas office of
Scott & Scott, a law and IT services firm.

we had taken a slightly different approach in the mid-90s ...
recognizing that there were diametrically opposing requirements for
account transaction related data (needing to be both readily available
and at the same time, kept confidential and never divulged to anybody)
... the approach was to drastically reduce the threats and exploits
associated with the most common data breaches (i.e. make the information
useless to attackers for the purposes of performing fraudulent
transactions).

this was the basis of the periodic comment (regarding account
transaction data) that even if the planet was buried under miles of
information hiding encryption, it still couldn't prevent the information
leakage.

from above:
It is heavily challenged in the practical world in two respects: the
(human) language is opaque and the ideas are simply not widely
deployed. Consider this personal example: I spent many years trying to
figure out what caps really was, only to eventually discover that it
was what I was doing all along with nymous keys. The same thing
happens to most senior FC architects and systems developments, as they
end up re-inventing caps without knowing it: SSH, Skype, Lynn's x95.9,
and Hushmail all have travelled the same path as Gary Howland's nymous
design. There's no patent on this stuff, but maybe there should have
been, to knock over the ivory tower.

from Coyotos history page ...
Coyotos is the successor to the EROS system, which is in turn the
successor to the KeyKOS system. Since the system inherits 30 years of
prior research and development history, it seems appropriate to
briefly describe some of that history and the people who contributed
to it.

... snip ...

more from Coyotos history page ...
My own contact with this work came in 1990. As a co-founder of HaL
computer systems, I became involved in evaluating various operating
system platforms for use by HaL. In 1990, UNIX robustness wasn't
great, and we hoped to find something that would be largely operator
free and highly robust. Key Logic made a presentation to us about
KeyKOS. For reasons that were largely political, HaL decided not to
gamble on KeyKOS, but I became convinced that KeyKOS offered something
worthwhile.

... snip ...

for other trivia, the "H" in "HaL" had been head of the austin
workstation division (in an earlier life, for a time, I had been his
only direct report) and the "L" had come from SUN.

there used to be a joke in the valley that there were only 200 people
in the business ... they kept moving around, so it just appeared like
there were more.

1. low security, which is characterised by the coolness world of PHP
and Linux: shove any package in and smoke it.
2. medium security, characterised by banks deploying huge numbers of
enterprise apps that are all at some point secure as long as the bits
around them are secure.
3. high security, where the applications are engineered for security,
from ground up.

The Internet as a whole is stalled at the 2nd level, and everyone is
madly busy fixing security bugs and deploying tools with the word
"security" in them. Breaking through the glass ceiling and getting up to
high security requires deep changes, and any sign of life in that
direction is welcome. Well done Google.

mentioning that the personal computer heritage is the stand-alone machine,
where numerous applications were accustomed to taking over the whole machine.

now a recent article with slightly different perspective

Microsoft not letting the door hit former employees on their way to Google
http://valleywag.com/tech/exits/microsoft-not-letting-the-door-hit-former-employees-on-their-way-to-google-320493.php
Microsoft's Treatment of Google Defectors
http://slashdot.org/articles/07/11/11/1341256.shtml

from above:
Anyone leaving Redmond for the search leader is a threat. Not because
they'll scurry around collecting company secrets — as if Google's
interested in Microsoft's '90s-era technologies. Departing employees,
however, might tell other 'Softies how much better Google is.

... snip ...

Oddly reminds me of the jan96 microsoft developer's forum at moscone
... while the internet was mentioned ... the theme was all about
protecting the developers' (enormous) investment (in visual basic). a
couple past posts mentioning the conference theme:
http://www.garlic.com/~lynn/2004k.html#32 Frontiernet insists on being my firewall
http://www.garlic.com/~lynn/2004l.html#51 Specifying all biz rules in relational data

tidbits from above:
With Oracle likely to sell more than $18 billion in software this year,
it's hard to believe the world's second-largest software company in its
infancy in 1977 had $2,000 pooled by its four founders. And its first
"CFO" was the accounting student who delivered pizzas to the startup.

from above (speaking of oracle running on pdp-11):
Mike Blasgen: I don't remember; probably 1979 or 1980. The thing that
impressed me the most was that it ran on a little PDP-11. The machine
looked to be the size of a carton of cigarettes. It must have been an
LSI-11 version of the machine, if my recollection of the size is
correct. And System R at the time in most of our joint studies and at
IBM was running on 168s. Now a 168 is only maybe the power of a 486DX2
or something, but the fact of the matter is it was a huge machine which
would probably not fit in this room.

... snip ...

another interesting tidbit from above:
Roger Bamford: ... At the time that I joined they were embarking on
this portability strategy, which actually made a lot of sense, because
hardware was expensive in 1984, and by making the software portable, you
could essentially commoditize hardware. Which is what Oracle did, and
that created a lot of revenue potential for Oracle, because they got
back the money that the customers were saving by going to open
systems.

... snip ...

for other topic drift (also from above):
Brad Wade: Well, when was Ted Codd made an IBM Fellow?

Mike Blasgen: 1976.

Brad Wade: I remember the reception they had for him in the Building 28
Cafeteria. At that time he said, "It's the first time that I recall of
someone being made an IBM Fellow for someone else's product." It was
Oracle's.

Costs extra. Some Multics software was not "bundled" with the hardware
purchase, but instead had an additional charge. Typically there would
be several prices: a large amount for a one-time paid-up license, or an
initial fee and then a monthly license charge. This practice, of
unbundling software and leasing it to the customer for a monthly fee,
was introduced by IBM about 1970, and represented a radical shift in
computer finance. Multics took to it reluctantly. It led to
complications, since we had to avoid dependencies from standard
software on unbundled products: for example, we couldn't use MRDS to
store accounting data and produce reports. Unbundled software was
stored in >system_library_unbundled, also called >unb.

including getting involved in dependencies of standard software on
"unbundled products" ... when I released my resource manager (guinea pig
for starting to charge for some types of kernel software, not just
application software) ... which included a lot of stuff that was
required for multiprocessor support (which was "free").
http://www.garlic.com/~lynn/subtopic.html#fairshare

from above, about m'soft contracting for RDBMS from Sybase:
Jim Gray: And Microsoft took their code and sold it on OS/2. The reason
for that was that about 1986, IBM was trying to take over the PC market,
and they had their own operating system - OS/2 - they had their own
hardware. Microsoft said that they had to somehow protect themselves
against something called OS/2 Extended Edition. There was going to be
this thing called OS/2, which was basic OS/2, and then Extended Edition,
which was going to cost hardly anything more, was going to have a
database system in it, and compilers, and query - QBE was going to be
built into it, and all sorts of stuff. So Microsoft felt they had to
have something like that. So they went to Sybase and said, "We'll get
our SQL engine from the Sybase guys, and that will be our Microsoft
Extended Edition." And Microsoft remarketed Sybase in the OS/2
world. The relations between Microsoft and Sybase were not warm or
cordial. When it came time to port Sybase to NT, Sybase let Microsoft do
the job. And then there was a divorce at some point, similar to the IBM
divorce about OS/2, that IBM would do OS/2, and Microsoft would go its
own way. There was a similar divorce vis-a-vis Microsoft, where
Microsoft now owns the Sybase code, so the Microsoft SQL Server now is
going its own way, and they've made it more SQL-compliant, and they're
adding GUIs to it, and so on. It's now a major force in this whole
database world. And the thing that's driving everybody crazy I believe
in the database world is, this thing is very cheap. It's, order, five
thousand dollars for a server, as opposed to a hundred thousand dollars
for a server. This server is capable of doing hundreds of transactions a
second. Scary. Pat, did I ... ?

from above:
The America Competes Act authorizes $33.6 billion in new funding for
three broad areas: increasing research investment; strengthening
science, technology, engineering and mathematics (STEM) education from
elementary through graduate school; and promoting innovation. It creates
at least 40 new federal programs.

from above:
The researchers found the security loophole in the random number
generator of Windows. This is a program which is, among other things, a
critical building block for file and email encryption, and for the SSL
encryption protocol which is used by all Internet browsers. For example:
in correspondence with a bank or any other website that requires typing
in a password, or a credit card number, the random number generator
creates a random encryption key, which is used to encrypt the
communication so that only the relevant website can read the
correspondence. The research team found a way to decipher how the random
number generator works and thereby compute previous and future
encryption keys used by the computer, and eavesdrop on private
communication.

... snip ...

[ClassicMainframes] multics source is now open

Peter.Farley@BROADRIDGE.COM (Farley, Peter x23353) writes:
Thanks a lot for the info and the link. Most interesting. Another
important piece of computer history available to the world at large. Bravo
to Bull for releasing it.

It would be an interesting project to write the emulator for that machine
architecture.

I ran into a similar problem when my resource manager was selected to be
the guinea pig for starting to charge for kernel software. I had done a lot
of kernel restructuring (in the resource manager) for multiprocessor
operation. This created a problem since the bundled/free multiprocessor
support needed lots of code from the (priced) resource manager.
http://www.garlic.com/~lynn/submain.html#unbundle

Frank McCoy <mccoyf@millcomm.com> writes:
It's rather like the present system of fixed versus variable-rate loans.
Variable-rate-loans are attractive to *banks*, because *they* are
protected if interest rates rise. Thus they offer lower overall rates
for such loans. OTOH, fixed-rate-loans are attractive to *customers*
because it protects the borrower from interest-rate-increases, while the
bank loses (comparatively). So, the bank charges a higher rate for such
loans.

modulo not doing detailed analysis ... variable rate loans almost took
citibank down in the '80s (after which they totally got out of
the home mortgage business) ... long winded post recently referenced
a number of times
http://www.garlic.com/~lynn/aepay3.htm#riskm Thread Between Risk Management and Information Security

the current scenario involving credit backed securitized instruments ...
has quite a bit of variable rate loans as the root (although in
conjunction with introductory subprime teaser rates for the variable
rate loans). this morning one of the financial channels quoted
probabilities for internet-based financial players in the subprime
market actually going bankrupt (related to clients just wanting to bail
as quickly as possible w/o waiting to see how polluted some of the
holdings actually are). recent posts with references to these
securitized credit instruments possibly obfuscating risk assessments
http://www.garlic.com/~lynn/2007p.html#50 Newsweek article--baby boomers and computers
http://www.garlic.com/~lynn/2007q.html#7 what does xp do when system is copying
http://www.garlic.com/~lynn/2007q.html#41 Newsweek article--baby boomers and computers
http://www.garlic.com/~lynn/2007r.html#60 Fixing our fraying Internet infrastructure

Oracle will supply preconfigured images -- or virtualized files that
combine the Oracle database with a preconfigured version of Linux -- for
ease of installation and deployment. The move is Oracle's way of picking
up on the use of virtualized appliances, software preconfigured with an
operating system to run in a virtual machine.

Anne & Lynn Wheeler <lynn@garlic.com> writes:
modulo not doing detailed analysis and variable rate loans almost took
citibank down in the '80s (after which they totally got out of
the home mortgage business) ... long winded post recently referenced
a number of times
http://www.garlic.com/~lynn/aepay3.htm#riskm Thread Between Risk Management and Information Security

however the article goes on
http://www.extremetech.com/article2/0,1697,1156606,00.asp
The Genesis of Virtual Machines

The idea of a virtual machine is not new--its roots actually go back
almost to the beginning of computing itself. Initially, the concept of a
virtual machine came about in the 1960's on mainframes as a way to
create less complex multi user time share environments.

with a footnote from Melinda's tome about when Creasy had decided to
build the first virtual machine system, CP40
Creasy had decided to build CP-40 while riding on the MTA. "I launched
the effort between Xmas 1964 and year's end, after making the decision
while on an MTA bus from Arlington to Cambridge. It was a Tuesday, I
believe." (R.J. Creasy, private communication, 1989.)

Intel Ships Power-Efficient Penryn CPUs

Walter Bushell <proto@oanix.com> writes:
Weren't we making the transition from small to large scale integrated
circuits about 1967? No, a brief google says that was in the mid 70's.
Just plain integrated circuits in 1967 then and many machines extant
with discrete transistor logic. LSI is "tens of thousands of
transistors".

cp40 was done on a 360/40 with custom modified virtual memory hardware. when
the 360/67 with standard virtual memory became available, cp40 morphed into cp67

another cp67/cms reference from the multics website (i.e. multics was
on the 5th flr of 545 tech sq, the science center was on the 4th flr of
545 tech sq, and the science center machine room with 360/67 was on 2nd
flr of 545 tech sq)
http://www.multicians.org/thvv/360-67.html

Anne & Lynn Wheeler <lynn@garlic.com> writes:
the interviewer asked what are the possible reasons for the shortfall in
investments. the "specialist" explained that one reason is that 1/2 of
the production project specialists will reach retirement age over the
next three years and there wasn't enough talent to undertake additional
projects that typically take 7-8yrs.

from the article ...
Sixty-one percent of federal managers say their agencies do not have
knowledge management policies to help prepare for the impending
brain-drain, according to a recent survey.

...

Organizations "can't afford a single gap of knowledge," said Joel
Brunson, president of Tandberg's federal market business. However, when
a mass exodus of workers do leave their jobs, "it's about losing
day-to-day knowledge, tricks of the trade," and an accumulation of
what's been learn over 25 to 35 years, he said.

Marty Zimelis wrote:
Bob,
Right name, but I believe the wrong derivation. The "67" in CP-67 comes
form the fact that it ran on the S/360 model 67, the only production model
of the S/360 line that implemented Dynamic Address Translation (DAT) --
virtual storage.

Some would argue that was the first version of VM. Others would argue
that the line starts with VM/370, the first generally available version of
VM, which was first released in August of 1972. (FWIW, SHARE has been
celebrating VM's birthdays using the VM/370 release date as the origin.
Hence the 35th birthday was celebrated at SHARE 109 in San Diego last
Summer.)

CP40 predated CP67. Cambridge Science Center had cp67 up and running
and had also installed it out at Lincoln Labs. The last week in Jan68,
three people came out to install it at the university where I was an
undergraduate. I was then invited to attend the spring 68 SHARE
meeting in Houston where cp67 was "officially" announced. In that
sense, the univ. was early "beta test" for cp67. For other topic
drift, the univ was also a "beta test" site for the original CICS ... and I
got tasked to support/debug it also ... misc. past posts mentioning CICS
http://www.garlic.com/~lynn/submain.html#bdam

I had been doing various work on os360, including a lot of workload
throughput optimization. When CP67 was installed, I also started doing
some work on it ... and then made a presentation on some of the work
at the fall68 SHARE meeting in Atlantic City. Old post with part of that
presentation
http://www.garlic.com/~lynn/94.html#18 CP/67 & OS MFT14

The cp67 group "split" off from the science center and took over the
(IBM) Boston Programming Group on the 3rd flr of 545 tech sq; science
center was on the 4th flr, science center machine room was on the 2nd
flr.
http://www.garlic.com/~lynn/subtopic.html#545tech

In the morph from cp67 to vm370, the group continued to expand,
eventually outgrowing the 3rd flr and moved out to the old SBC bldg in
Burlington Mall. During this period the company (and some amount
of the vm group) got distracted by the Future System effort
http://www.garlic.com/~lynn/submain.html#futuresys

When FS was finally killed, there was a mad scramble to get things
back into the 370 hardware and software product pipeline. Possibly
somewhat as a result, the development group picked up quite a bit of
stuff that I had been doing and shipped it in vm370 release 3. Then
there was also a decision to release other stuff that I had been doing
as the resource manager. Misc. posts
http://www.garlic.com/~lynn/subtopic.html#fairshare

It was also in this time-frame that the internal scramble was on to
get going on MVS/XA. POK finally convinced the company that it was
necessary to kill the vm370 product, shutdown the burlington mall
location and transfer all the people to POK as part of being able to
meet the MVS/XA delivery schedule. Eventually, Endicott was able to
salvage the vm370 product mission ... but effectively had to rebuild
an organization nearly from scratch.

40th anniv. of when I first got acquainted with cp67 is coming up in
two months ... and the 40th anniv of cp67 announcement is later next
spring.

For other drift, on 23jun69, the company announced unbundling
... somewhat as the result of various litigation going on. However,
the case was made that unbundling and starting to charge separately
for software only applied to application software; kernel software
still needed to be "bundled" with the machine (and "free").

which talks about adding TTY/ascii terminal support to cp67 and coming
up against some 2702 controller limitation. As a result the
univ. kicked off a project to build a clone controller (using an
Interdata/3 minicomputer) ... which subsequently got written up
blaming four of us for clone controller business.
http://www.garlic.com/~lynn/subtopic.html#360pcm

Anyway, some case can be made that the clone controllers prompted
the corporation's Future System activity ... which allowed the 370
product pipeline to somewhat go bare ... and helped provide an opening
for clone processors in the 70s.

In any case, as I was about to release the resource manager and in
response to clone processors ... the corporation made a decision to
start transition to charging for kernel software ... and the resource
manager was selected as guinea pig. as a result, I got to spend a lot
of time with business people and lawyers over a period of several
months, helping figure out policies for kernel software
unbundling/charging.
http://www.garlic.com/~lynn/submain.html#unbundle

One of the issues with early kernel unbundling was there was some kernel
software that was bundled and some that was not, but a policy decision
was that "free" kernel software couldn't have a dependency on
unbundled/priced kernel software. Unfortunately, I had included quite
a bit of multiprocessor kernel reorg as part of the resource manager.
This created a dilemma when it was decided to go ahead and release
vm370 multiprocessing support (bundled/free ... but couldn't require
the priced resource manager as a dependency)

Another part of unbundling was that SE services started to be charged
for. Up until that time, a lot of new SEs got their training "on the
job" at customer sites (as part of an SE team, sort of an apprentice type
program). With unbundling, that came to a halt. To somewhat compensate,
a program was started called HONE (Hands-On Network Environment) which
was going to have several cp67 installations around the US, and branch
SEs could remotely log in and gain experience running various
operating systems in virtual machines.

However, one of the other things that the science center had done was
to port apl\360 to cms ... and the dataprocessing organization
started using HONE to host a number of sales and marketing support
applications implemented in cms\apl. Eventually this grew to dominate
all HONE usage and the original HONE purpose dwindled away. HONE then
made the transition from cp67 with cms\apl to vm370 with apl\cms
... and numerous clones were created around the world. one of my
hobbies was building highly customized kernels ... which i provided and
supported to various internal organizations, including HONE. Some
number of the HONE clones, I personally installed. One of the first
was when EMEA hdqtrs moved from the US to La Defense (just outside
paris). Also at some point, it was not even possible to submit
customer machine orders without them having first been processed by some
HONE application. misc. past posts mentioning HONE and/or APL
http://www.garlic.com/~lynn/subtopic.html#hone

File sharing may lead to identity theft

hancock4 writes:
I'm not exactly sure what the difference is between "file sharing" and
a plain download. Apparently "file sharing" is what people do to
download music and videos, sometimes without properly paying for
them. Could someone elaborate on the technical differences?

download implies a pull operation from the server to the client.
servers open themselves up to connections from external sources and
frequently have various kinds of firewall technologies that limit the
kind and scope of incoming connection requests (lots of systems having
numerous kinds of deficiencies and vulnerabilities in parts of the
system supporting incoming connection requests ... and the widespread
vulnerabilities in this system area being one of the original
motivations for various kinds of firewalls).

filesharing implies that these clients start acting as servers and
other clients can create connections and retrieve files. frequently
this may involve disabling various kinds of defenses and firewalls
(that turn off incoming connection requests).
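a minimal sketch of the distinction (hypothetical python sockets, not any
particular filesharing protocol): a plain download only makes an outbound
connection, while a filesharing peer also has to listen for inbound
connections -- the part that typically requires opening up the firewall:

import socket

def download(host, port, request=b"GET /file\n"):
    # ordinary pull: the client connects outbound; nothing listens locally
    with socket.create_connection((host, port), timeout=10) as s:
        s.sendall(request)
        return s.recv(65536)

def share_one_file(port, payload=b"here is the shared file\n"):
    # filesharing peer: this "client" now accepts an inbound connection,
    # which usually means relaxing the inbound-blocking defenses
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as srv:
        srv.bind(("", port))
        srv.listen()
        conn, _addr = srv.accept()     # some other peer pulls the file from us
        with conn:
            conn.sendall(payload)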

proxies and firewalls were already starting to become a standard
countermeasure for attaching to the internet. however, in doing
detailed studies about threats and vulnerabilities for such servers
... one item found that resulted in successful exploits of systems on
the internet was "maintenance".

basically the process involved disconnecting from the internet,
disabling all the defenses as part of doing maintenance, and when all
done ... for various reasons, they forgot to re-enable the defenses
before reconnecting to the internet.

hancock4 writes:
Could you translate into layman's terms? What exactly is "server
virtualization software"?

The concept of "virtual storage", as I understand it, is making an
application program think it had more core ('RAM') memory than it had
by using disk storage for parts of a program not being used at that
moment. It's power was limited as trying to compress too much of a
program slowed it down significantly. To me, today the concept is
almost obsolete since RAM memory is damn cheap, measured in many
hundreds of meagabytes. Virtual was developed when memory was still
in kilobytes or at best a few megabytes.

Anyway, I don't see how the above definition applies today to
something like Oracle.

How does this new product make it easier and more efficient for data
processing centers?

... and then you added/installed applications afterwards ... hoping that
you got the correct level of application software that worked with the
specific operating system and processor (not only initial install but
also life-cycle maintenance going on independently for all the
components).

then things have somewhat gone back and forth between having operating
systems pre-installed on your hardware and consumers doing after-market
operating system installs, upgrades, re-installs, etc. (and hoping that
the application software, operating systems, and processors still were
all compatible and worked with each other).

also over the years, monolithic operating systems have tended to get
more and more complex.

when we did the virtual machine hypervisor ... the hypervisor was a much
simpler set of software with a very well defined interface (minimizing
complexity and incompatibilities).

we then found out that we could do something called service virtual
machines ... basically a stripped down, simplified operating system
... somewhat tailored to run in a virtual machine ... and targeted at
doing only one kind of specific task.

in the rebirth of virtual machine technology, the service virtual
machine concept is sometimes being referred to as a virtual appliance
... rather than having one large, monolithic, complex (and error prone)
body of (kernel) software ... there is highly optimized, stripped
down, simpler software targeted at doing only one (or a very few) kinds of
tasks.

so one thing that can be done ... rather than having a large number of
the different possible (monolithic, complex, error prone) "operating
systems" ... where something like oracle ... has to deliver an
application installation process that tries to adapt the oracle
application to all the idiosyncrasies of the "operating system" it might
be installed on.

Oracle can ship a preconfigured oracle virtual appliance executable
image ... that includes its own stripped down, highly optimized and
highly tailored kernel (as part of the executable image) ... all set up
for operation in a virtual machine. This can drastically reduce
life-cycle maintenance headaches (with applications and operating
systems getting out of sync and developing incompatibilities).

This not only reduces the ongoing lifecycle headaches ... but can also
reduce the total cost of ownership as well as skill levels at customer
sites required for its care&feeding.

hancock4 writes:
Could you translate into layman's terms? What exactly is "server
virtualization software"?

The concept of "virtual storage", as I understand it, is making an
application program think it had more core ('RAM') memory than it had
by using disk storage for parts of a program not being used at that
moment. It's power was limited as trying to compress too much of a
program slowed it down significantly. To me, today the concept is
almost obsolete since RAM memory is damn cheap, measured in many
hundreds of meagabytes. Virtual was developed when memory was still
in kilobytes or at best a few megabytes.

Anyway, I don't see how the above definition applies today to
something like Oracle.

How does this new product make it easier and more efficient for data
processing centers?

Endicott was working on the follow-on to the 135/145 ... virgil/tully ...
which had spare room for microcode. There was starting to be mid-range
clone processor competition (primarily outside the US) and Endicott was
looking for new added value features ... in addition to simply better
price/performance. They had done a VS1 (kernel) microcode assist and
approached the VM group out in burlington about doing a VM370 microcode
assist.

The VM group turned them down, saying that they were too busy doing
other stuff. As a result, they eventually showed up on my doorstep.
Old post with some results into initial investigation into selecting
portions of vm kernel to "drop" into mcode:
http://www.garlic.com/~lynn/94.html#21 370 ECPS VM microcode assist
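the selection problem can be sketched roughly like this (hypothetical
python; the routine names, sizes, and percentages are made-up
illustrations, not the actual ECPS measurements): profile the kernel,
then take the highest-payoff paths until the available microcode space
is used up:

profile = [
    # (kernel path, bytes of 370 code, % of kernel CPU time) -- illustrative only
    ("dispatch/unwait",          800, 18.0),
    ("page fault first level",   600, 12.0),
    ("ccw translation inner",    900, 11.0),
    ("virtual interrupt reflect", 500,  9.0),
    ("free storage alloc",       400,  7.0),
    ("everything else",        60000, 43.0),
]

def pick_for_microcode(profile, space_budget_bytes):
    chosen, used, covered = [], 0, 0.0
    # take the paths with the best payoff per byte of microcode space first
    for name, size, pct in sorted(profile, key=lambda r: r[2] / r[1], reverse=True):
        if used + size <= space_budget_bytes:
            chosen.append(name)
            used += size
            covered += pct
    return chosen, used, covered

chosen, used, covered = pick_for_microcode(profile, space_budget_bytes=6000)
print(chosen, used, f"{covered:.0f}% of kernel time covered")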

In addition to the other stuff I was doing in the same timeframe, not
only did i get roped into working on design for ECPS VM microcode
assist ... Endicott then wanted me to run around the world with them
to explain what it all meant to the business and forecasting people in
different countries around the world. unlike domestic/US, world trade
countries would forecast sales for the upcoming year ... which then
turned into build orders at manufacturing plants ... which were then
bought/delivered to the countries ... which in turn had to be sold to
customers (by contrast, domestic forecasts were not directly tied to
actual sales volume ... so manufacturing plant sites had to do
significant amount of investigation regarding US forecasts since the
plant sites would have to "eat" any inaccuracies).

It turns out that 138/148 (virgil/tully) was just on the leading edge
of the shift from hardware costs dominating customer budgets to
people costs starting to dominate (and skill
availability representing bottleneck to customer installs). As a
result, Endicott pushed hard for having VM370 preinstalled and
transparently integrated into every 138/148 shipped from the factory
(slightly akin to LPARS in the current generation of mainframes). The
problem was that large portions of the corporation viewed vm370 as
"competitive" with other operating system offerings and for one reason
or another were out to kill the product. Having vm370 preinstalled
and transparently integrated into every 138/148 shipped ran counter to
these other political forces (for instance POK was in the process of
making the case for killing off vm370 and having all the people in the
burlington mall group transferred to pok as part of helping mvs/xa
schedules). In any case, the vm370 preinstall and transparently
integrated for every 138/148 was shot down.

this market segment then started the transition to workstations and large
PCs ... so that the 4331/4341 follow-ons, the 4361/4381, never saw the
success of their predecessors (vax sales saw a similar effect)

for other topic drift, another group that wanted to do a vm370-only, 5-way
smp machine also approached me about the same time ... so i was doing a lot
of work on that project at the same time I was doing the endicott ecps
related stuff (and all the other kernel modifications and enhancements
mentioned in the previous post, including leading up to releasing
the resource manager). the 5-way smp project eventually got canceled
before shipping. misc. past posts about the 5-way smp effort
http://www.garlic.com/~lynn/submain.html#bounce

the 5-way smp project included significant microcode capability ...
so i moved a lot of i/o scheduling, dispatching, and other kernel
operations into the microcode as defined functions (different approach
than ecps which was moving very targeted short snippets of kernel code
into microcode). the i/o scheduling has some characteristics of what
was later seen in 370/xa ... and the dispatching bore some resemblance
to the later work done in i432 (making management and number of
processors somewhat transparent to the kernel code). all of these
functions could execute concurrently on different processors.

after the 5-way project was killed, a quick&dirty version was adapted
to the vanilla vm370 kernel running on standard 370 multiprocessors (w/o
all the microcode capability). This involved about 6000 bytes of
kernel code that would execute concurrently with fine-grain locking on
multiple different processors. However, the majority of the kernel smp
support was done with traditional single kernel lock that was
state-of-the-art in the period.

There were some differences, i.e. the standard single kernel lock (of
the period) was used for all of the kernel, and processors on entry to
the kernel would "spin" on the lock until it was made available
(effectively the kernel would only be executing on single processor at
a time).

The adaption of the VAMPS design had fine-grain locking
multiprocessing changes for only a (very) small portion of the kernel
that represented the majority of time spent in the kernel. I didn't
have to go thru the rest of the kernel making multiprocessor
changes ... since it continued to run on only one processor at a time
(with single kernel lock). I contended it provided nearly the thruput
of having completely modified the whole kernel for fine-grain lock and
parallelism ... while requiring a small fraction of the source code
changes.

The other difference was that there was an extremely light-weight
queuing mechanism ... instead of a single kernel "spin-lock" (for the
majority of the kernel code) ... it was something that I originally
called a bounce lock ... i.e. an attempt was made to obtain the
kernel lock, and if the processor couldn't obtain the lock, it would
queue a request (for the kernel lock) and go off to the dispatcher to
look for other kinds of work.
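a minimal sketch of the idea (my own reconstruction in python threads,
not the original vm370 code): a processor that misses the kernel lock
queues the pending kernel work and goes back to the dispatcher instead
of spinning:

import threading
from collections import deque

class BounceLock:
    def __init__(self):
        self._kernel_lock = threading.Lock()
        self._queue_guard = threading.Lock()
        self._deferred = deque()          # queued kernel-work requests

    def enter(self, kernel_work):
        # run kernel_work under the kernel lock, or defer it and return False
        if self._kernel_lock.acquire(blocking=False):
            try:
                kernel_work()
                self._drain()             # holder also runs any deferred requests
            finally:
                self._kernel_lock.release()
            return True
        with self._queue_guard:
            self._deferred.append(kernel_work)
        # close the window where the holder released before we queued
        if self._kernel_lock.acquire(blocking=False):
            try:
                self._drain()
            finally:
                self._kernel_lock.release()
            return True
        return False                      # caller goes back to the dispatcher

    def _drain(self):
        while True:
            with self._queue_guard:
                if not self._deferred:
                    return
                work = self._deferred.popleft()
            work()

# a processor getting False from enter() doesn't spin; it dispatches some other
# runnable work, and the queued request is executed by whichever processor
# currently holds the kernel lock.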

and created a dilemma when it was decided to release standard product
multiprocessor support (the resource manager was "unbundled" and
charged for ... while the multiprocessor support was to ship as
bundled/free).
http://www.garlic.com/~lynn/submain.html#unbundle

2 byte interface

Paul Hinman <paul.hinman@shaw.ca> writes:
Back in the olden days some high speed devices required the "2 byte
interface" on the channel. I believe that this was a requirement for
connecting the 2305 FHSD to the 2880 block multiplexor channel. I
assume that this meant some changes/additions to the channel itself.
In terms of the actual device connection instead of a bus and tag
cable did it require 2 bus cables and a single tag cable.

the earlier 3mbyte/sec interface ... before data-streaming 3mbyte/sec
... aka the standard channel interface had handshaking on every byte
transferred (with accompanying signal and processing latency). the later
3mbyte/sec "data-streaming" (used for 3380 disks) relaxed the
hand-shaking on every byte ... allowing doubling of the transfer rate
... as well as doubling maximum channel lengths from 200' to 400'

i never saw a 2305-1 but my impression was that it had the same number
of heads as the 2305-2, but pairs of heads were configured on opposite
sides of the disk surface. it only needed a quarter rotation to get the
desired record under a head ... and then transferred on two heads in parallel.

in past threads, somebody mentioned that an external crypto device
also operated at 3mbyte/sec.

The new urgency to fix online privacy

"Rostyslaw J. Lewyckyj" <urjlew@bellsouth.net> writes:
Curious that the revelation/warning is not considered very noteworthy
by the SANS security organization, and no promises of fixes or even
acknowledgment from MS.

CSA 'above the bar'

shmuel+ibm-main@PATRIOT.NET (Shmuel Metz , Seymour J.) writes:
PSA is real address 0; it's absolute address 0 for at most one of the
processors in the complex. Neither real nor absolute addresses are virtual
addresses, and the mapping of virtual 0 to real 0[1] is strictly a
software convention.

multiprocessor support required a unique PSA for every processor.

in 360 multiprocessor, the prefix register (for every processor)
contained the "real" address of the PSA for that processor; different
processors chose different "real" addresses for their PSA ... so as to
have a unique PSA for every processor in the complex. the real, "real"
page zero was no longer addressable (assuming every processor chose some
other "real" address than zero).

this was modified for 370: the prefix register specified the "real"
address of the PSA for that processor (as in 360) ... however, if the
processor addressed the address in the prefix register, it would
"reverse" translate to real page zero. as a result, the real, real page
zero could be used as a common communication area between all processors
in the complex ... and was addressed by using the address in the
processor's prefix register.
http://publibz.boulder.ibm.com/cgi-bin/bookmgr_OS390/BOOKS/dz9zr003/3.7?DT=20040504121320

from above:
1. Bits 0-50 of the address, if all zeros, are replaced with bits 0-50
of the prefix.

2. Bits 0-50 of the address, if equal to bits 0-50 of the prefix, are
replaced with zeros.

3. Bits 0-50 of the address, if not all zeros and not equal to bits 0-50
of the prefix, remain unchanged.

... snip ...

#1 & #3 were how things operated in 360 multiprocessor support; #2
was introduced with 370 multiprocessor support (modulo 360 & 370
used 4kbyte pages & 24bit real addressing while the above description
is for 64bit Z and 8kbyte pages)
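
as a rough illustration of the three rules, a sketch in C against the
64-bit/8kbyte-block description quoted above (the mask values, function
name, and example prefix are mine, not from the POO text):

  #include <stdint.h>
  #include <stdio.h>

  /* prefixing sketch: bits 0-50 of a 64-bit address select an 8kbyte block,
     bits 51-63 are the byte offset within the block */
  #define BLOCK(a)  ((a) & 0xFFFFFFFFFFFFE000ULL)   /* bits 0-50 */
  #define OFFSET(a) ((a) & 0x0000000000001FFFULL)   /* bits 51-63 */

  uint64_t real_to_absolute(uint64_t real, uint64_t prefix)
  {
      if (BLOCK(real) == 0)                  /* rule 1: real block 0 -> prefix block */
          return BLOCK(prefix) | OFFSET(real);
      if (BLOCK(real) == BLOCK(prefix))      /* rule 2: prefix block -> absolute block 0 */
          return OFFSET(real);
      return real;                           /* rule 3: everything else unchanged */
  }

  int main(void)
  {
      uint64_t prefix = 0x4000;   /* hypothetical 8kbyte-aligned prefix value */
      /* PSA reference relocates up; reference to the prefix block folds back to 0 */
      printf("%llx %llx\n",
             (unsigned long long)real_to_absolute(0x10, prefix),
             (unsigned long long)real_to_absolute(0x4010, prefix));
      return 0;
  }

rule #2 is the 370 addition described above; with only rules #1 and #3
(the 360 behavior) the real, real page zero becomes unreachable once
every processor's prefix is non-zero.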

one could claim that the relationship of cp67 to vm370 is somewhat like
the relationship of HASP to JES2. misc past posts mentioning HASP,
JES2, and/or JES2/HASP networking support
http://www.garlic.com/~lynn/submain.html#hasp

other cp67 heritage ... in the transition from MVT to os/vs2 (i.e.
os/360 with virtual memory support) ... basically MVT was laid out in a
single virtual address space ... thus the reference to OS/VS2 SVS
(single virtual storage) to distinguish it from the later OS/VS2 release
MVS (multiple virtual storage).

One might claim that there was little difference between OS/VS2 SVS
and MVT with VM handshaking, laid out in a 16mbyte virtual address
space. The biggest difference was needing to have channel program
translation built into MVT. The initial prototypes of OS/VS2 SVS were
built with minimal virtual address space support and a copy of CP67
CCWTRANS (and a couple other CP67 routines associated with channel
program translation) hacked into the side.

cp67 and science center technology was used in lots of the transition to
operating in a virtual memory environment for 370. the science center had
a number of efforts going on in the area of system and performance
monitoring, modeling, and simulation (some of it being the runup to
capacity planning). one of the projects involved tracing instruction and
data storage references and then doing semi-automated program
reorganization to optimize for operation in a virtual memory environment.
This was used for several yrs internally before being turned into a
product and released to customers as "VS/REPACK" (two months before my
vm370 resource manager was first released).

An early version of the technology was used to help in the morph of
apl\360 to cms\apl (originally on cp67/cms) ... which required
completely redoing the apl storage allocation and garbage collection
implementation for operation in virtual memory environment.

vs/repack was also used by a number of product groups ... not only for
helping with transition from real storage to virtual memory environment
... but also for things like application "hot spot" identification
(i.e. where program is spending a lot of its time). For instance it was
used in STL by the IMS development group for extensive studies of IMS
operation and performance.

another tool was a system performance analytical model implemented in
APL which was eventually made available as a sales and marketing support
tool on HONE (the internal cms-based timesharing service)
http://www.garlic.com/~lynn/subtopic.html#hone

as the performance predictor. branch people could input customer
configuration and workload details and ask "what-if" questions about
what would happen if there were configuration and/or workload changes.

from above:
"IBM has been delivering virtualization capabilities for more than 40
years and today we unveil a milestone in the area of data storage
virtualization with the shipment of 10,000 storage virtualization
engines -- a fact no other storage company in the world can claim," said
Kelly Beavers, Director, Storage Software, IBM. "By working across
multiple platforms, IBM's storage virtualization helps to lower energy
costs and unlocks the proprietary hold that other storage vendors have
had on customers for years -- which IBM believes makes storage
virtualization the killer application in the storage industry over the
next decade."

the science center had moved cp40 to the 360/67 for cp67 and also had it
installed out at lincoln labs. it wasn't installed at the univ. where i
was an undergraduate (3rd installation) until the last week in jan68
... and it wasn't "officially" announced until spring share in houston,
1st week mar68. minor reference mentioning the 35th anniv of the cp67 announce
http://www.garlic.com/~lynn/2003d.html#72 cp/67 35th anniversary

Brian Inglis <Brian.Inglis@SystematicSW.Invalid> writes:
They were used for timesharing bureaux, so accessible from terminals at
work and later home, and later PCs running terminal emulators.
They were also available to run directly on PCs using add-in 370
coprocessor cards.

"Server" processors for numbercrunching?

nmm1@cus.cam.ac.uk (Nick Maclaren) writes:
I don't know when IBM 'discovered' FLOPS, but my recollection is that
it really only gained currency with the Whetstone benchmark, and it
most definitely was FLOPS or KiloFLOPS then (case ignored). My
recollection is that IBM remained wedded to MIPS for at least a decade
after that.

there was 195 ... other than that, I didn't notice a lot until mid-70s
... HSM for 168-3 and some work done for 148.

the only reason that I noticed the 148 ... was I got involved in doing
ECPS microcode assist for 138/148 and then got dragged into selling the
machine internally around the world .. for a little topic drift recent post
http://www.garlic.com/~lynn/2007s.html#36 Oracle Introduces Oracle VM As It Leaps Into Virtualization

... in other parts of the world, the low-end and mid-range much more
heavily dominated the market. the machine had to be "sold" to the
business and market forecast people in the different non-us business operations
... since they were the people that would establish the number of
machines that would be ordered from manufacturing ... which, in turn,
would be the machines that would be available for the different
country salesmen to sell to customers. so there were these one-week
sessions at various countries (or country collection) meetings around
the world explaining why customers would order/replace/upgrade 138/148
(vis-a-vis 135/145, clone competitors, etc). One of the items that I
remember "selling" was that 148 increased floating point significantly
over 145 (much more than the 145->148 MIP rate increase, apparently
something that was important in some of the world trade market
segments).

nmm1@cus.cam.ac.uk (Nick Maclaren) writes:
Yes. I was peripherally involved with Whetstone benchmarking from
1972 onwards, and was an IBM user (i.e. support staff) from the same
year (on a 370/165). I don't remembger IBM making much of Flops until
much later.

some of this may have been the difference between some of the machine
engineers and the specific people in sales&marketing in
(specific?) branch offices. the 148 engineers felt that they had
significantly improved things vis-a-vis the 145 ... and it was one
of the things touted to the business planning and forecasting people in
emea (europe, middle east, africa) and afe (asia and far east).

i've commented before about something analogous with respect to
commercial batch processing and timesharing ... that the commercial
batch processing market so dominated sales that hardly anybody
associates the company with timesharing ... some past comments about
cp67 & vm370 commercial timesharing offerings
http://www.garlic.com/~lynn/submain.html#timeshare

it wasn't that there wasn't significant timesharing ... but that the
commercial batch processing market was so enormously larger that
hardly anybody thinks about the timesharing aspect.

one of my sample comparisons was that the number of customer
commercial batch processing installations was enormously larger than
the number of customer timesharing installations, the number of
customer timesharing installations was significantly larger than the
number of internal timesharing installations ... and the total number
of internal timesharing installations was larger than the maximum
number of internal timesharing installations that i supported.

at one point, the total number of installations that I was providing
systems for was approx. the same as the total number of multics
installations that ever existed. this was a semi-rivalry since the
science center (responsible for virtual machines, internal networking
technology, some amount of interactive computing, inventing markup
languages, etc) was on the 4th flr of 545 tech sq ... with multics
being done on the 5th flr of the same bldg
http://www.garlic.com/~lynn/subtopic.html#545tech

nmm1@cus.cam.ac.uk (Nick Maclaren) writes:
Yes. I was peripherally involved with Whetstone benchmarking from
1972 onwards, and was an IBM user (i.e. support staff) from the same
year (on a 370/165). I don't remembger IBM making much of Flops until
much later.

801/risc went thru an early phase circa 1980 where it was going to
replace a wide variety of internal, custom microprocessor chips (for
instance the microprocessor for the 4341 follow-on was at one time
going to be 801/risc). several of those projects from the period
floundered.

another project was the 801/risc ROMP ... which was a chip for the office
products division for the displaywriter follow-on. when that was
canceled, the group searched around for something else to use the
product for and somewhat fell into deciding on using it for the (unix)
workstation market. it required some amount of tweaking the machine
and hiring the company that had done the at&t unix port for pc/ix
... to do something similar ... and it was announced as the pc/rt and aix.

however, the unix workstation product somewhat fell into the national labs
and technical, numerical intensive market segment ... where FLOPS were
significantly more important ... and for which the PC/RT didn't do a very
good job. having found a niche market ... they then tried to do
significantly better for FLOPS with the RIOS chipset (the romp follow-on)
that was eventually announced as power and rs/6000. misc. 801, risc,
romp, rios, iliad, fort knox, power, etc posts
http://www.garlic.com/~lynn/subtopic.html#801
misc. old 801/risc email
http://www.garlic.com/~lynn/lhwemail.html#801

here was a product that was almost exclusively targeting the numerical
intensive market place ... and wasn't even supposed to stray (very far)
into commercial. one could even see this when we started doing scaleup
work ... some old medusa project email
http://www.garlic.com/~lynn/lhwemail.html#medusa

where the technology was moved over so that it would exclusively
concentrate on only the numerical intensive market segment (and we
were informed we couldn't work on anything involving more than four
processors)
http://www.garlic.com/~lynn/subtopic.html#hacmp

i can remember the difficulty when i was in paris in the early 70s,
doing a HONE clone install ... as part of emea hdqtrs moving from the US
to la defense (just outside paris) ... and trying to figure out how to
read my email back in the states. misc. posts mentioning hone
http://www.garlic.com/~lynn/subtopic.html#hone

the corporate history web page mentions that EMEA hdqtrs moved to La
Defense in 1983.

what i remember is doing a HONE clone install at La Defense in the early
70s as part of emea moving from the US to La Defense. There were three
bldgs, possibly only one finished so that it was being moved into
... and the ground surrounding the three bldgs was still bare dirt,
landscaping hadn't been done.

old email predating 1983 (but I'm not sure which bldg. I did the
install).
To: wheeler
From: somebody at ehqvm1

Hi all Need help in TCP/IP stack Rfcs

"linuxsrbabu@gmail.com" <linuxsrbabu@gmail.com> writes:
Can i get any chart or table which gives a brief overview of Tcp/ip
protocols RfcNos, that is main Rfc of the Respective protocol and its
update. can any one can me give url or any site where can i get it.
Please help me in this regard.

gilmap@UNIX.STORTEK.COM (Paul Gilmartin) writes:
I believe Cowlishaw's book reports that Rexx was developed in the VM
and MVS environments concurrently. It flourished in the former and
withered in the latter, less likely because CLIST fulfilled the need
better than EXEC2 than because less enthusiasm for innovation exists
in the MVS environment (case in point: TCP/IP). Rexx didn't resurface
under MVS until TSO/E.

one of the quotes:
Mike Cowlishaw had made the decision to write a new CMS executor on
March 20, 1979. two months later, he began circulating the first
implementation of the new language, which was then called "REX". Once
Mike made REX available over VNET, users spontaneously formed the REX
Language Committee, which Mike consulted before making further
enhancements to the language. He was deluged with feedback from REX
users, to the extent of about 350 mail files a day. By consulting with
the Committee to decide which of the suggestions should be implemented,
he rather quickly created a monumentally successful piece of software.

from one of the references in the above:
By far the most important influence on the development of Rexx was the
availability of the IBM electronic network, called VNET. In 1979, more
than three hundred of IBM's mainframe computers, mostly running the
Virtual Machine/370 (VM) operating system, were linked by VNET. This
store-and-forward network allowed very rapid exchange of messages (chat)
and e-mail, and reliable distribution of software. It made it possible
to design, develop, and distribute Rexx and its first implementation
from one country (the UK) even though most of its users were five to
eight time zones distant, in the USA.

the internal network had been larger than the arpanet/internet from just
about the beginning until sometime mid-85. part of the internal network
issues was that while there were mvs/jes2 nodes, nearly all the nodes
were vm ... and the jes2 nodes had to be carefully regulated.

there were several issues around why jes2 nodes had to be carefully
regulated on the internal network (some independent of the fact that the
number of vm systems were significantly larger than the number of mvs
systems)

1) jes2 networking started out being some HASP mods from TUCC that
defined network nodes using the HASP pseudo device table ... limited to
255 entries ... 60-80 entries nominally taken up by pseudo spool devices
... leaving possibly only 170 entries for network node definitions

2) the jes2 implementation would discard traffic if it didn't have either
the origin or destination node in its local definitions. the internal
network had more nodes than jes2 could define for the majority of its
lifetime ... so jes2 needed to be restricted to boundary nodes (at least
not discarding traffic just passing thru); a rough sketch of the
difference follows this list.

3) the jes2 implementation had a number of other deficiencies, including
having confused header fields as to network-specific versus local
processing information. different versions or releases with minor
variations in headers would bring down the whole mvs system. even
restricted to purely boundary nodes, there is the infamous story of a
jes2 upgrade in san jose resulting in mvs system crashes in hursley. as a
consequence there were special vm drivers created for talking to mvs jes2
systems ... which would convert jes2 headers to a format compatible with
the specific system on the other end of the line. this was somewhat a
side-effect of the vm implementation having separated networking control
information from other types of information ... effectively providing a
kind of gateway implementation ... something not possible in the JES2
networking infrastructure (including not having a way of protecting
itself from other JES2 systems, requiring intermediary vm systems to keep
the JES2 systems from crashing each other).
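
purely for illustration, a hypothetical sketch of the difference
(made-up structures and names in C, not actual JES2 or VNET/RSCS code):
a jes2-style node needs both endpoints in its fixed-size table or the
traffic is dropped, while an intermediate node acting as described above
only has to pass the traffic along toward the next hop.

  #include <stdio.h>
  #include <string.h>

  #define MAX_NODES 255                /* jes2-style fixed node-table limit */

  struct net {
      char nodes[MAX_NODES][9];        /* locally defined node names */
      int  count;
  };

  static int defined(const struct net *n, const char *node)
  {
      for (int i = 0; i < n->count; i++)
          if (strcmp(n->nodes[i], node) == 0)
              return 1;
      return 0;
  }

  /* jes2-style handling: discard unless both origin and destination
     are locally defined */
  void jes2_route(const struct net *n, const char *origin, const char *dest)
  {
      if (!defined(n, origin) || !defined(n, dest))
          printf("discard %s -> %s (undefined node)\n", origin, dest);
      else
          printf("forward %s -> %s\n", origin, dest);
  }

  /* store-and-forward style: pass traffic along toward the next hop even
     when this node has never heard of the endpoints */
  void passthru_route(const char *origin, const char *dest, const char *next_hop)
  {
      printf("forward %s -> %s via %s\n", origin, dest, next_hop);
  }

  int main(void)
  {
      struct net jes2 = { .nodes = {"SANJOSE", "HURSLEY"}, .count = 2 };
      jes2_route(&jes2, "SANJOSE", "NEWNODE");          /* dropped */
      passthru_route("SANJOSE", "NEWNODE", "HURSLEY");  /* passed along */
      return 0;
  }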

at some point ... while VM could run native protocol drivers as well as
(multiple different) JES2 drivers ... JES2 could only run specific
JES2 drivers ... it was decided to start shipping VM with only the JES2
drivers (even tho the native VM protocol drivers were more efficient).
this was seen in the bitnet deployment
http://www.garlic.com/~lynn/subnetwork.html#bitnet

the vm tcp/ip product was developed in vs/pascal ... originally adapted
from a pascal compiler developed in the los gatos lab for developing
(mostly vm/cms based) vlsi tools.

there were some thruput issues with the vm implementation getting only
about 44kbyte/sec thruput while using nearly a full 3090 processor. i
then did the rfc 1044 implementation and in some tuning tests at cray
research between a cray and a 4341-clone, got 1mbyte/sec thruput (4341
channel media thruput) using only a modest amount of the 4341 processor.
http://www.garlic.com/~lynn/subnetwork.html#1044

the base vm implementation was "ported" to mvs by doing a vm diagnose
implementation for mvs.

later there was a vtam-based tcp/ip implementation done by a
subcontractor. the folklore is that the initial implementation had tcp
thruput significantly faster than lu6.2. the subcontractor was then told
that everybody knows that lu6.2 is faster than tcp/ip and the only way
that tcp/ip would be faster is if it was an incorrect implementation
... and only a "correct" implementation was acceptable.

for some totally other drift ... early on, i wanted to show that rex(x) was
not just another pretty exec. i undertook to do a replacement for the
kernel dump analyser ... ipcs, which was mostly a large assembler
program. i wanted to demonstrate that working half-time over a three month
period, i could re-implement ipcs in rex(x) with ten times the function
as well as ten times faster than the assembler implementation.
http://www.garlic.com/~lynn/submain.html#dumprx

this was eventually in use at nearly all internal installations,
although never released to customers.

Steve O'Hara-Smith <steveo@eircom.net> writes:
The problems with letting the patient hold the data are the risks
of loss and unskilled alteration. A decent logically centralised,
physically distributed access controlled store would be my preferred
solution.

the problems are not limited to loss ... but also include phishing and
things like fraudulent alteration (and/or counterfeits)

long ago we were helping out with a small conference that was pitching
chipcards and some number of other technologies. part of the attendees
were from a county with large medical social services. they were
reasonably confident that they had a population of approx. 7mill
eligible for medical social services ... but had over 20mill
registered (i.e. each eligible person was registered an avg. of three
times). they had to go to a logically centralized operation to eliminate
the huge number of duplicates (quite a bit of the duplicates also
involved various kinds of fraud; while the avg. was 3/person, they found
some people having 30-40 or more different registrations).

effectively the web model is adaptable for such an operation ... for
instance, lots of the larger web services offer significant physical
replication in a large number of places ... and/or caching.

for other drift ... the chipcard scenario i've referenced numerous
times as the "offline" model vis-a-vis the "online" model. various
govs. and commercial entities have sunk billions into chipcard
technology ... targeted at the offline chipcard model.

one of the favorite chipcard scenarios is that lengthy medical record
information can be loaded into personal chipcards for use by emergency
personnel and/or first responders. this scenario presumes (spending
billions on the program) 1) emergency personnel have time to access the
card, 2) information on the chipcard is useful/applicable for
emergency personnel, 3) emergency personnel have the facilities for
chipcard operation/access and 4) implicit in all this is that the
information is for an "offline" environment where emergency personnel
have the capability to access the chipcard information and do NOT have
realtime, online access to qualified medical personnel.

the counter argument for the extremely expensive deployment of
(offline) chipcard based personal emergency medical information ... is
that the money would show significantly better return-on-investment by
improving the online capabilities (however, this results in enormous
write-offs for the investment in the distributed, offline paradigm).

a similar scenario is applicable to the x.509 identity digital
certificates (from the early 90s), which were facing the ever increasing
amount of (general, not limited to medical) personal information that
would be carried in such certificates ... and by the mid-90s, numerous
institutions realizing that this represented significant privacy and
liability issues. there was quite a bit of x-over with the offline
chipcard emergency medical scenario ... since the same/similar
chipcards were presumed to contain x.509 identity digital certificates.

in some of the current identity card scenarios ... there are flavors
of attempting to recoup some of the past billions in chipcard
investments ... and a resurgence of earlier efforts to have ever
increasing amounts of personal information involved.

this also strays into confusing identification and authentication.
for an online environment, a chipcard can be a (difficult to
counterfeit) simple something you have authentication ... from
3-factor authentication
http://www.garlic.com/~lynn/subintegrity.html#3factor

while "identification" basically starts down the path (again) of
loading ever increasing amounts of personal information.

the other "old" scenario for such "offline" (large, expensive)
chipcard infrastructure (besides large amounts of personal emergency
medical information) was the "electronic" drivers license. The drivers
license chipcard would contain large amounts of personal (and identity
related) information (as opposed to much simpler "somthing you have"
authentication). The target was offline environment ... where
"appropriate" people could integrate the chipcard for all the
necessary information.

The thing that the chipcard industry somewhat missed was that law
enforcement transitioned to an online environment ... if they went to the
trouble of stopping the person, officers would do an online
interrogation ... which would have a superset of any information that
might be contained on the card. The issue again, was what scenario
could there possibly be where it was important enuf to take time-out
to do an (offline) check of information on the card ... and not bother
to do an online check for realtime information. The case then is made
that rather than investing huge amounts of money in beefing up
an offline paradigm ... there is a much bigger return-on-investment in
putting the money into beefing up the online paradigm (with its much
higher quality real-time information).

from above:
A bee dance-inspired communications system developed by Georgia Tech
helps Internet servers that would normally be devoted solely to one task
move between tasks as needed, reducing the chances that a Web site could
be overwhelmed with requests and lock out potential users and
customers. Compared with the way server banks are commonly run, the
honeybee method typically improves service by 4 percent to 25 percent in
tests based on real Internet traffic. The research was published in the
journal Bioinspiration and Biomimetics.

... snip ...

note that there is quite a bit of overlap between this and
virtualization for server consolidation. the avg. load for most servers
tends to be very low ... but with randomly occurring spikes. if you host
several such servers on a single machine or cluster of machines ... then
randomly occurring usage spikes for any specific server can be averaged
out across all the commonly hosted servers.
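
as a made-up worked example: ten servers each averaging 5% utilization
with occasional spikes to 50% ... stand-alone, each needs its own machine
sized for its own spike; consolidated, the combined baseline is only
about 50% of one such machine's capacity, and if the spikes are
uncorrelated it is rare for more than one or two to hit at the same
instant ... so a consolidated machine with two or three times the
capacity of one of the original servers (rather than ten times) covers
the aggregate.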

from above:
The USMC, which has about 12,000 x86-based servers, is adopting an
enterprise-wide approach to virtualization via a deal with VMware. The
project includes a plan to reduce the USMC's current total of about 300
data centers to 30 facilities plus 100 "mobile platforms." A major goal
of the virtualization strategy is increasing system availability and
operational continuity.

... snip ...

also from above:
For instance, when IT workers want to take a server offline for
maintenance now, they have to do the work at night and issue a
half-dozen or so warning notices starting 30 days in advance.

this was a common/frequent virtual machine refrain from the 60s and 70s
.... modulo some of my kernel work ... where i still needed to take down
the whole machine for some new kernel work on the real machine
... especially performance related work.

however, getting ready for releasing the resource manager ... i had to
do a significant amount of (performance-related) benchmarking as part of
validating/calibrating the resource manager. the final sequence involved
2000 (largely automated) benchmarks that took three months elapsed time
to run
http://www.garlic.com/~lynn/submain.html#benchmark

other posts mentioning the resource manager ... it had a lot of dynamic
adaptive features that i had done as an undergraduate in the 60s
... frequently referred to as the "fairshare" scheduler since the
default policy was fair share:
http://www.garlic.com/~lynn/subtopic.html#fairshare

Steve O'Hara-Smith <steveo@eircom.net> writes:
Access control specification and the fine details of interaction
protocols are all that remain. I'd be inclined to go with something like
RSA keyfobs for identification of authorised users and but the policy
requirements are not clear to me.

the RSA keyfobs are primarily the secureid rolling-code tokens from rsa,
inc ... the company behind secureid bought rsa and changed their name to
rsa, inc (since then EMC has bought the resulting rsa, inc).

the upside with secureid tokens is that they display a non-repeating,
time-varying value derived from a shared-secret ... which then can be
entered with a standard keyboard. the downside is that it is a chore to
keep the changing values in sync ... especially if multiple jurisdictions
and/or systems are involved (aka this is not a public key implementation).
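
as illustration only, a minimal time-based rolling-code sketch in C (the
actual securid algorithm is proprietary, so this is just a generic
analogue; the secret value, the 60-second interval, and the toy mixing
function are all assumptions, and the mixing function is not
cryptographically sound):

  #include <stdint.h>
  #include <stdio.h>
  #include <time.h>

  /* stand-in for a real keyed hash (e.g. hmac); illustration only */
  static uint32_t mix(uint64_t secret, uint64_t interval)
  {
      uint64_t x = secret ^ (interval * 0x9E3779B97F4A7C15ULL);
      x ^= x >> 33;  x *= 0xFF51AFD7ED558CCDULL;  x ^= x >> 33;
      return (uint32_t)(x % 1000000);            /* 6-digit display value */
  }

  /* token side: new code every 60 seconds from the per-token shared secret */
  uint32_t token_code(uint64_t secret, time_t now)
  {
      return mix(secret, (uint64_t)now / 60);
  }

  /* server side: must hold the same secret and tolerate a little clock drift */
  int verify(uint64_t secret, time_t now, uint32_t entered)
  {
      uint64_t iv = (uint64_t)now / 60;
      for (int skew = -1; skew <= 1; skew++)
          if (mix(secret, iv + skew) == entered)
              return 1;
      return 0;
  }

  int main(void)
  {
      uint64_t secret = 0x0123456789ABCDEFULL;   /* hypothetical per-token secret */
      time_t now = time(NULL);
      uint32_t code = token_code(secret, now);
      printf("display %06u  verified %d\n", code, verify(secret, now, code));
      return 0;
  }

the point being that every verifying server has to hold its own copy of
the per-token secret and stay roughly clock-synchronized with the fob
... which is exactly what makes the scheme awkward to manage across
multiple independent domains.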

we looked at quite a bit of this when we were involved in co-authoring
the x9.99 financial industry privacy standard. this included talking to
the glba guys ... but also some of the gov. agency people behind crafting
hipaa (one guy had done initial work on the regulations back in
the 70s and some of the provisions still haven't been invoked) ...
aka financial comes under hipaa in areas like billing statements where
specific procedures may be listed.

various participants were also heavily involved in "disclosure" (and
information sharing) legislation ... especially regarding "opt-out"
vis-a-vis "opt-in" provisions. part of the overlap between the two was
use of electronic signatures for various kinds of electronic
authorization documents.

as mentioned before there were surveys regarding overall "privacy" and
the top two issues were 1) identity theft (resulting in various kinds
of personal financial fraud) and 2) denial of service (gov. agencies
and/or commercial entities basing decisions on information that they
shouldn't have access to).

one set of policies involves signatures (potentially electronic
signatures) by the individual, explicitly authorizing access. the
other set of policies is more along the lines of traditional security
access control where some institution has deemed that it has access
... and it is trying to control which institution members/employees
can actually get access (actually trying to prevent non-authorized
access as a whole category). the keyfob thing is mostly limited to a
specific institution trying to do traditional security access control
... but runs into difficulty when it attempts to go cross-domain
.... for much the same reason that all shared-secret oriented
operations have difficulty with cross-domain deployments.

we actually looked at common (authentication/authorization)
infrastructure that could satisfy all the possible policies and
requirements (traditional security access control ... including
x-domain extended across multiple institutions ... as well as
individual explicitly authorizing access).

one of the big policy/paradigm problems was shifting traditional
security thinking (in the x9.99 privacy standard work) where CSOs were
doing traditional protecting the institution assets (from outside
access) to protecting the individual's assets (in some cases from the
institution). Myopic concentration on the similarities with
traditional institutional security failed to adequately take into
account the full implications of all the personal privacy issues.

... this was analogous to the work on x9.59 financial transaction
standard where the requirement was to preserve the integrity of the
financial infrastructure for ALL retail payments.

... where the intersection was possible use of electronic signatures for
opt-in/authorization scenarios for access to information (w/o it
institutions weren't allowed to access personal information
... potentially already in their possession)

one of the big paradigm disruption/discontinuities was trying to help
(institution) CSOs and security officers getting their heads around the
concept that they weren't just trying to prevent unauthorized/external
access to institutional assets ... but having to also prevent
institutional access to personal information (and/or only allow
institutional access to personal information when specific access has
been authorized by the individual).

from above:
Telecommuting is a win-win for employees and employers, resulting in
higher morale and job satisfaction and lower employee stress and
turnover. These were among the conclusions of psychologists who examined
20 years of research on flexible work arrangements.

Translation of IBM Basic Assembler to C?

Steve O'Hara-Smith <steveo@eircom.net> writes:
Indeed they are, OTOH records in the form of a folder full of loose
bits of paper are really scary as are partial records spread around several
incompatible systems with no data transfer mechanism or policy.

If a central store is created it should not be a government thing
(for one thing it should be international). I would go for something set up
by a group of medical authorities.

complained that (online) "central" operation ... for something like
financial transactions ... met that it was obviously a single location
for the whole world. however, the "central" financial transaction model
is lots of "centralized" locations ... much more like the web "central"
location ... but dating back to the 60s (ubiquitous networking).

even the "web model" (even if financial use of the model dates back to
the 60s) single location for any specific data ... can have multiple
replicated sites for availability. for some drift ... we had coined the
term disaster survivability and geographic survivability) (to
differentiate from disaster recovery)
http://www.garlic.com/~lynn/submain.html#available

Steve O'Hara-Smith <steveo@eircom.net> writes:
Doesn't keeping them in sync just depend on decent timekeeping ?
Keeping good time on the authentication servers is easy (ntp), eventually
the keyfob will drift too far out (but by then the battery you can't get at
to change is probably dead anyway so you need a new keyfob).

there is a secret involved that is permuted in a non-obvious way by time.
the issue in handling multiple domains (aka multiple different servers
across multiple different domains or authorities) is that it means keeping
all the servers and keyfobs in sync ... as well as the secrets across all
the servers in all the domains. it doesn't scale well.

the other aspect is what we characterize as the "institutional-centric"
model vis-a-vis person-centric model. we've claimed that if the use of
"institutional-centric" tokens (or other kinds of secure something you
have authentication devices) as substitution for passwords ... becomes
widespread ... then you have one physical token in place of every
password ... and becomes as equally unwieldy (as passwords) to manage.

The other analogy is the anti-piracy/DRM efforts in the 80s with
applications loaded on harddisks ... and a "special" application floppy
disk (unique for every application) that had to be loaded into the
floppy drive (in order for the application to run). Again, an
institutional-centric solution that becomes unwieldy for more than a few
special cases.

for use as a something you have authentication hardware token,
somewhat facetiously I would say that I was taking a $500 milspec token
and aggressively cost reducing it by 2-3 orders of magnitude while
improving the security.

the other aspect was that there was a lot of work done looking at the whole
end-to-end process and what would be needed to enable the whole paradigm
to be a person-centric operation (and acceptable to all the existing
institutions). The claim was that the overall infrastructure savings
might be 4-5 orders of magnitude (say 100,000 times) ... aka 2-3 orders
of magnitude cost reduction in a single token and a couple orders of
magnitude reduction in the total number of tokens (under the assumption
that hardware authentication tokens ever caught on).

Translation of IBM Basic Assembler to C?

Howard S Shubs <howard@shubs.net> writes:
Multiple implants, preferably including the skull, one in each limb, the
torso, the genitalia, the ears... So you can copy someone's identity by
cutting off an ear or a limb, yes. The one in the skull might be
difficult to steal, but it's also difficult to implant. The one in the

the issue in the original scenario was the probability (and associated
return on investment) that the chip would be useful.

the scenario justifying the chip

1) accident/emergency

2) emergency medical personnel and/or first responders
on the scene

3) emergency personnel don't have online access to medical
personnel (watching TV, they seem to be very limited
in what they are allowed to do, much of it requires
electronic communication and supervision by a remote
doctor)

4) emergency personnel do have the time, resources, and capability
to access information in a person's chip and are capable
of effectively utilizing the available information
(when they otherwise aren't in electronic communication
with a real doctor ... which also limits what they are
allowed to do)

the claim is that the probability of meeting all the conditions, in order
to justify the chip, is so close to zero ... that the money is much
better spent improving other aspects of the emergency response
infrastructure.

the scenario with a chip being destroyed and/or otherwise unavailable
... and still meeting all the other criteria (justifying multiple chips)
further pushes the hypothetical scenarios into the fantasy category

a simple counter argument would be that any dire emergency with
injuries that would likely destroy a single chip ... would result in
emergency personnel being so preoccupied that taking time out to
find/access detailed medical records in any other chip ... might be
considered a lot of electronic noise.

for some topic drift ... while boyd was responsible for the f16 and lots
of the characteristics of the f15 and f18 ... he commented that the early
f16 headsup displays were counter productive ... that they provided a lot
of scrolling numbers that hindered (rather than helped) a pilot's ability
to operate in a critical encounter (aka distracting noise)

Translation of IBM Basic Assembler to C?

Steve O'Hara-Smith <steveo@eircom.net> writes:
If a central store is created it should not be a government thing
(for one thing it should be international). I would go for something set up
by a group of medical authorities.

Steve O'Hara-Smith <steveo@eircom.net> writes:
In short I don't see insurmountable problems with using them for
authentication and they are easily combined with something you know for
two factor authentication simply by using the keyfob to control access to a
password login.

I think the GRID guys claimed that they were running into problems with
the same individual using the same keyfob across 6-12 different
independent institutions. for instance, can you show up with your own
keyfob at each institution and have it registered? if you register it at
one institution (say a local ISP that may have a staff of local
highschool students) would it possibly compromise using the same keyfob
registration at your employer or online banking?

there is also the question of administrative platforms for handling both
single domain, multiple installation, as well as x-domain (aka multiple
independent institutional) authentication operation. Part of the issue is
the something you have token authentication mechanism ... but the other
issue is the administrative infrastructure to manage the authentication
policies and operations as well as the corresponding authorization
policies and operations (what is the authorization implication that
corresponds to some authentication).

Anne & Lynn Wheeler <lynn@garlic.com> writes:
somewhat unrelated ... but a business news channel had interview with
oil investment specialist ... who made some statement that oil
exploration/development investment was underfunded in 2005 by 1/3rd
... likely leading to 1m barrel/day production shortfall in 2010-2011
timeframe (although there are significant uncertainties that could
affect that shortfall prediction), growing to possibly 4m barrel/day
production shortfall by 2013-2014.

the interviewer asked what are the possible reasons for the shortfall in
investments. the "specialist" explained that one reason is that 1/2 of
the production project specialists will reach retirement age over the
next three years and there wasn't enough talent to undertake additional
projects that typically take 7-8yrs.

from above:
You've heard the reasons for high oil prices: instability in the Middle
East, booming demand in China and India, the sagging dollar. Now add
another one to the list: Engineers. The world doesn't have enough
of them.

from above:
A move by Costco, a top-25 Internet merchant, to take PIN debit on its
site would lend considerable credibility to the idea of taking PIN debit
online. This is a notion many EFT officials have historically shied away
from, citing concerns about security and about a possible threat to the
interchange income EFT network members earn from signature debit. PIN
debit interchange rates are typically lower than those for signature
debit.

Translation of IBM Basic Assembler to C?

Steve O'Hara-Smith <steveo@eircom.net> writes:
I don't see how unless you also have the secret for it. In every
use I have seen the institution issuing the keyfob keeps the secret
carefully ... well secret. So the keyfob can only be used with that
institution.

a frequent and commonly recurring problem with the shared-secret paradigm
... it is almost as if every institution/environment really
believes that an individual's relationship with that institution is the
one and only relationship requiring authentication and that there are no
others
http://www.garlic.com/~lynn/subintegrity.html#secrets

aka if tokens were ever to catch on ... rather than carrying around a
piece of paper (or electronic memo) with 100+ shared-secrets ... you
would always carry around a backpack with 100+ (shared-secret paradigm)
keyfobs ... you might need to have the backpack organized into separate
categories and maybe color coding ... so as to help facilitate
identifying the correct, needed keyfob at any specific instant.
discussion of person-centric vis-a-vis institutional-centric
authentication paradigms
http://www.garlic.com/~lynn/2007s.html#59 Translation of IBM Basic Assembler to C?

The new urgency to fix online privacy

"Rostyslaw J. Lewyckyj" <urjlew@bellsouth.net> writes:
Curious that the revelation/warning is not considered very noteworthy
by the SANS security organization, and no promises of fixes or even
acknowledgment from MS.

from above:
As recently as last Friday, Microsoft hedged in answering questions
about whether XP and Vista could be attacked in the same way, saying
only that later versions of Windows "contain various changes and
enhancements to the random number generator." Yesterday, however,
Microsoft responded to further questions and acknowledged that Windows
XP is vulnerable to the complex attack that Pinkas, Gutterman and
Dorrendorf laid out in their paper, which was published earlier this
month.

CBFalconer <cbfalconer@yahoo.com> writes:
It probably is, or at least nearly so. Comparison with the Euro or
the Canadian dollar cuts the price down to about $60 today, which
allows for considerable Chinese competition, oilco profiteering,
etc. The basic problem is the dollar deflation, driven by the

some part is the huge balance of trade deficit that has been going on for
decades with most other countries ... which means they have an enormous
surplus of dollars. some of that enormous surplus of foreign-held dollars
is being invested back into the country, helping dampen interest rates
(since there is a significant source of funds from these dollar surpluses)

one of the things supporting the dollar in the past was the size of the
associated economy vis-a-vis various alternatives. the EU created a
comparably sized economy, helping strengthen the euro ... the euro not
only rises vis-a-vis the dollar ... but also creates a currency
competitor for the dollar (weakening the dollar as the primary world
currency choice). similar forces are going on in the far east.

The EU and more recently the rise of economies in asia have created
currencies increasing in value (against the dollar) ... backed
by economies on a par with the US. I believe there was a previous post
mentioning that the dollar having been the dominant currency of choice in
the world could possibly account for 1/3rd of its perceived value
(aka helping prop up the dollar's value against all world currencies ...
in some cases, specific currencies can rise against the dollar ... as in
the case of the yen since the early 70s ... and/or the dollar can "fall"
against several major currencies).