hancock4 writes:
Secondly, I never understood why competitors never
could duplicate the level of customer support that IBM
provided, esp when some of them were large and successful
companies.

i was told a story about one of the seven dwarfs (I think rca) giving
testimony at the fed. gov. trial involving ibm. supposedly they
testified that in the late 50s every computer company realized that
the single most important thing to be successful in the business was to
have a compatible computer line (businesses were investing large
amounts in software applications ... but it was also a period of
significant corporate growth ... and they didn't need to scrap all
their software every time they upgraded a machine). the observation
was that every other computer company failed at this ... except ibm;
local plant/product managers were always optimizing the machine
architecture for the technology their specific product was using. IBM
supposedly had the only corporate leadership that forced all the
different plant/product managers to toe the (360) line.

a lot of the 360s were microcoded machines ... regardless of the
native hardware engine characteristics ... the microcode layer hid all
of that and provided a uniform 360 architecture to software. the
microcode emulation could impose as much as a ten-to-one performance
degradation between the delivered 360 thruput and the native hardware
engine thruput.

one might claim that the care that went into addressing solutions for
customer requirements ... went far beyond just having onsite
handholding. another characteristic would be realizing that software
development was the primary bottleneck (in the period) ... and
hardware upgrades for growing corporations could represent a
significantly bigger macro-problem (if software conversions were
required) than some of the more day-to-day micro-issues.

"Jon A. Solworth" writes:
On the contrary, the systems in place at Barings did not implement
standard and prudent separation of duty. If they had done so,
Leeson would almost certainly have been unable to commit a fraud
of this magnitude.

The purpose of security is to protect against attackers, even when
the attacker is an insider.

i've asserted in the past that in the 80s there was starting to be a
lot of work on insider threats and things like collusion ... aka once
you have countermeasures against single-insider threats ... you then
start dealing with combinations of insiders and collusion as attacks
on the single-point security processes.

the introduction of the internet has tended to obfuscate the insider
issues ... not necessarily reducing insiders as the major source of
fraud ... but the possibility of outsider attacks can obfuscate the
source of the exploits. hopefully at some point ... the environment
gets back to the state-of-the-art of the early 80s ... outsiders
pretty well excluded as a point of attack ... and having to worry
about collusion among insiders ... frequently with lots of
compensating processes.

strong authentication can help wall out outsiders ... but it can also
act as a deterrent (along with an effective audit log) for insiders,
increasing the probability that they could be successfully prosecuted
in cases of fraud (as well as possibly catching various acts early so
that in some instances the activities can be reversed or abrogated
.... i've been partial to the term abrogated ever since i ran across
it in the 370 architecture redbook).

use of logging for integrity has been around for some time in numerous
guises ... including database acid properties ... however the last
rsa conference had some track(?) that was sort of positioning logging
as the new technology for security integrity.

in the financial world ... risk management may include preventing
attacks ... but it may also be about catching and reversing the
effects of attacks.

Ancient history

Paul Rubin <http://phr.cx@NOSPAM.invalid> writes:
A browser does much more than render HTML. It has a fancy GUI instead
of a crude command line interface; it gets and processes data from
multiple network sources in parallel; it renders images; it interprets
javascript, and it sprouts improvements all the time in response to
technical developments and user requests. Yes, you could maybe do a
more careful job writing something minimal and strictly
standard-conformant; but we're talking about full-featured, responsive
browsers of the type that actual users want to actually use.

however, we had no approval/veto authority over what went on in the
client implementation and much of the client/server interaction ... we
were primarily limited to the server and the payment gateway
interactions (although i did give some presentations on business
critical dataprocessing requirements for network implementations, with
some number of the client implementors present).

I've had a couple conversations with the cve people about some
variability in the descriptions ... sometimes describing cause,
sometimes describing results, sometimes giving both. they claimed that
they are now trying to provide more uniform structure in the
descriptions.

from an analysis last spring ... mostly a simple count of CVE entries
containing specific words or word-pairs.

....
1246 mentioned remote attack or attacker
570 mentioned denial of service
520 mentioned buffer overflow
105 of the buffer overflow were also denial of service
76 of the buffer overflow were also gain root
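
the counting itself was trivial ... a sketch (python; assumes the CVE
entries have been loaded somewhere as (id, description) pairs ... the
sample entry here is invented):

  def count_terms(entries):
      descs = [d.lower() for _, d in entries]
      overflow = [d for d in descs if "buffer overflow" in d]
      return {
          "remote attack": sum("remote attack" in d for d in descs),
          "denial of service": sum("denial of service" in d for d in descs),
          "buffer overflow": len(overflow),
          "overflow + dos": sum("denial of service" in d for d in overflow),
          "overflow + gain root": sum("gain root" in d for d in overflow),
      }

  entries = [("CVE-XXXX-0001",
              "Buffer overflow ... allows remote attackers to gain root")]
  print(count_terms(entries))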

hancock4 writes:
I wonder if compatibility was a big issue in the late 1950s.

IBM didn't come up with it until the early 1960s SPREAD conference.
I believe part of the motivation was internal--IBM realized it
had to support a whole bunch of diverse platforms, including
systems software, applications software, and peripherals for
each platform. I don't think the other companies had as many
models to be concerned about compatibility.

Indeed, even in IBM there was great dissent over compatibility.
Haanstra wanted to put out a super-1401 using SLT chips. Others
were afraid of losing existing customers who wanted more capacity
while S/360 was developed and implemented. (Honeywell was
pushing a "Liberator" converter for 1401 customers.)

i think the testimony was that they had realized it by the late 50s
... and that they were then supposed to do something about it. whoever
was testifying claimed that their company had tried ... but that
corporate hdqtrs couldn't get the plant/product line executives to toe
the line ... while ibm corporate hdqtrs managed to pull it off.

there is some conjecture that if you are the only company in the
market having done the single most important thing correctly ... it
would be possible to get some number of the other details wrong
... and still prevail.

i've commented before that 360/67 dat & time-sharing was actually
more successful ... in number of systems and number of users ... than
many time-sharing systems that are possibly better known in the
literature ... but the dominance of the corporation's batch customers
vastly overshadowed the dat & time-sharing work.
http://www.garlic.com/~lynn/submain.html#timeshare

it was a period of rapid growth, and getting payroll out and
processing financial transactions, checks, etc. on the batch systems
had a much bigger bang for the buck than a lot of the time-sharing
stuff. however, things eventually reached some saturation point on all
the really important work that needed to be done ... and then along
comes more entry-level computing that can be used for the less
important computing.

while the corporation's batch market was much larger than the
corporation's time-sharing market ... that time-sharing market was
still larger than some number of the competitors' time-sharing markets
(it's just that the magnitude of the batch market dwarfed both for
quite some time).

note however, the internal corporate infrastructure was one of the
major world-wide users of its own time-sharing product ... and the
associated networking infrastructure (built on and in conjunction with
that time-sharing product) was larger than the whole arpanet/internet
from just about the beginning until around the summer of '85.
http://www.garlic.com/~lynn/subnetwork.html#internalnet

I've asserted that one of the reasons that the internal network was
larger than the arpanet/internet from just about the start ... was
that every node in the internal network had a flavor of gateway
functionality from the start ... which the arpanet/internet didn't get
until the great switch-over on 1/1/83. At the time of the switchover
arpanet/internet had approx 250 nodes
http://www.garlic.com/~lynn/subnetwork.html#internet

for a time there was even bitnet/earn ... an educational network
using the internal network technology ... but distinct from the
internal network (and not included in size comparisons of the
internal network to the internet)
http://www.garlic.com/~lynn/subnetwork.html#bitnet

there were some number of commercial time-sharing offerings built on
the technology ... but possibly dwarfing all of them was the internal
HONE system ... which was online support for world-wide field, sales,
and marketing people
http://www.garlic.com/~lynn/subtopic.html#hone

at the time of the consolidation of all the US HONE datacenters to
cal. in the late 70s ... it was starting to push 40k userids ... and
the HONE offering was replicated in numerous countries around the
world.

amdahl gave a talk at mit in the early 70s and was asked about raising
funding for the company ... he said that they figured that even if ibm
were to immediately walk away from 360 (possibly a veiled reference to
FS ... which was going to be more radically different from 360 than
360 had been from the prior generations), customers had already spent
at least $100b on software application development ... and that would
keep amdahl in the 360/370 clone market at least thru the end of the
century.

so supposedly amdahl clones came about because of FS project ...
and FS project supposedly came about because of 360 controller
clones ... aka
http://www.garlic.com/~lynn/2000f.html#16 FS - IBM Future System

speedy writes:
Are there any lite versions of these programs around? Particularly if
it's just basic functionality required?

how 'bout ...
389181 Oct 30 1994 nscape09.zip

... and a blast from the past ...
ATLANTA-Nov. 21, 1994-(BUSINESS WIRE)-Making it easier for businesses
and consumers to use and shop the Internet, MCI Monday announced
"internetMCI," a portfolio of services featuring such components as a
new secure electronic shopping mall, a user-friendly software package
for easy Internet access and high-speed network connections to the
Internet.

"MCI is making the Internet as easy to use, as accessible and as
critical to businesses as today's global phone network," said Timothy
F. Price, president of MCI's Business Markets. "With internetMCI,
businesses of all sizes will now be able to not only display but also
directly sell their goods and services over the Internet. For the 25
million people on the Internet, shopping the Internet will become
simple and secure."

The new MCI offering represents the most comprehensive set of Internet
services in the industry, according to Price. "There are other
companies that offer Internet-related services, but no one else offers
the full range of applications software, access, storefronts and
consulting services in one package," he said. "We now have everything
companies need to promote commerce over the net. This is what
American businesses have been waiting for."

Users of internetMCI will be able to browse and shop in MCI's new
Internet shopping mall called marketplaceMCI. MCI said it is working
with a number of America's most well-known retailers and information
providers to design storefronts for them when marketplaceMCI opens
early next year. MCI has already begun beta testing on-line
electronic shopping with about 40,000 employees.

A key component of internetMCI is a software system developed by
Netscape Communications (formerly Mosaic Communications). Using
encryption technology from RSA Data Security, the system integrates a
number of components into a secure environment.

Included are the Netscape Navigator server and database software for
storefront management and secure credit-card clearing. Also included
is a digital signature system operated by MCI to certify and identify
valid merchants for internetMCI.

The complete system allows consumers to shop and make secure
transactions directly over the Internet without the fear of having
their credit card number or other sensitive information stolen by
electronic eavesdroppers.

The software package also has point-and-click technology that lets
consumer and business users easily and quickly browse the Internet's
World Wide Web over ordinary phone lines.

"Transaction security is the last major hurdle to making the Internet
a viable marketing and distribution channel for businesses," said
Price. "By the year 2000, MCI expects commerce on the Internet will
exceed $2 billion and be as common as catalog shopping is today."

Through an agreement with FTP Software Inc., MCI will provide the
Internet Protocol software along with the Netscape software in one
easy-to-install package. FTP Software, the leading independent
supplier of TCP/IP-based network software, will also provide MCI with
integration and support of its software.

MCI will offer internetMCI software to customers at prices starting as
low as $49.95. The internetMCI software will also be included at no
additional charge to customers of networkMCI BUSINESS, an integrated
information and communications software package.

MCI will market storefronts to retailers and service companies that
want to promote and sell their goods to the estimated 25 to 30 million
people who can now access the Internet worldwide. MCI will offer
businesses consulting in the design, implementation and management of
their storefronts, in addition to the added value of MCI's ongoing
promotion and marketing of the new mall services.

MCI To Provide High-Speed Connections to Internet

MCI's internetMCI Access Services will be fully integrated with its
existing business long distance services. Internet access will be
available in a wide range of methods from switched local and 800
access and dedicated access to more advanced switched data services
such as ISDN, frame relay and, in the future, SMDS and ATM.

A full Internet service provider, MCI will offer dedicated access to
the Internet from nearly 400 locations in the United States.

Another component of internetMCI is the company's new high-speed
connections to the Internet through the new MCI Internet Network.
This network is one of the highest capacity, most widely-deployed
commercial Internet backbones in the world, providing businesses with
direct and reliable connections to the Internet.

Compared to most conventional Internet access networks, MCI Internet
Network offers greater transmission speed and capacity because the
network operates at 45 megabits per second. Next year, MCI will
increase the speed of the MCI Internet Network to 155 megabits per
second, capable of transmitting 10,000 pages in less than a second or
a 90-minute movie in just three minutes.

MCI Selected as Primary Internet Carrier

Following its selection by some of the major regional Internet providers
in the United States, MCI will become one of the world's largest
carriers of Internet traffic - carrying more than 40 percent of all
the U.S. Internet traffic.

The regional Internet providers BARRnet; CICnet; CSUnet; JVNCnet; Los
Nettos; Merit MICHnet; MIDNet; NEARnet; NorthWestNet; SURAnet; and
Sesquinet have been a part of the Internet since its inception and
have been a major force in the drive towards ubiquitous network
connectivity, which has helped make the Internet so popular.

MCI's Internet initiatives are being directed by Vinton G. Cerf, MCI
senior vice president for data architecture and an industry-recognized
"Father of the Internet," along with a team of world-class experts on
the Internet.

"The Internet is a global resource of unmeasured value and potential
to educators, governments, businesses and consumers," said Cerf. "MCI
is preserving and enhancing the intelligence and economic power of the
Internet while making it easier and more accessible than ever before."

MCI Showcases Interactive Multimedia Message on the Internet

Earlier this month, MCI began an innovative marketing campaign on the
Internet that plays off the company's successful Gramercy Press ads
for networkMCI BUSINESS. Users of the Internet can, with a click of
the mouse, learn more about the characters in the Gramercy Press
commercials, even hear their voices or see video images of them.

The campaign, which already has been viewed by more than 100,000
Internet users, has an interactive component that allows Internet
users to actually submit their art, poetry or short stories for
viewing on the Internet. MCI selects pieces and publishes them on the
"net," where they can be viewed by the millions of users of the
Internet worldwide.

Internet users can travel to Gramercy Press on their own (address:
http://www.mci.com/gramercy/intro.html) or via "Hotwired," the new
on-line spinoff of "Wired" magazine. MCI is a sponsor of the
magazine's "Flux" section which offers news about Internet movers and
shakers. Hotwired members can reach Gramercy Press at
http://www.hotwired.com (click-on "signal" zone).

"The Internet is a marketer's dream come to life," added Price. "It's
full-color, full-motion and full of potential. MCI not only expects
to be on the leading edge of marketing its own services on the
Internet, but also in the forefront of helping our customers tap the
marketing power of the Internet."

For more information on internetMCI, call 800/779-0949.

With 1993 revenue of nearly $12 billion, MCI Communications Corp. is
one of the world's largest communications companies. Headquartered in
Washington, MCI has more than 65 offices in 58 countries and places.
The company's Atlanta-based MCI Business Markets provides a wide range
of communications and information services to America's businesses,
including networkMCI BUSINESS, long distance voice, data and video
services, and consulting and outsourcing services.

IBM/Watson autobiography--thoughts on?

Charles Richmond writes:
Leasing had the advantage that the equipment was *not* taxed, like
the capital equipment that you *owned*. And the lease money was a
deductible business expense.

i have some recollection of being told that learson was responsible
for converting the mainframe leasing install base to sales ... it got
him a really great qtr (year?) but it had some later downside effect
on ongoing revenue.

there is a related reference, which is a pdf file ... google's html
rendered version:
http://64.233.179.104/search?q=cache:NgIil04v3OEJ:www.kiet.edu/~docs/ejournals/Annals%2520of%2520the%2520history%2520of%2520computing/a1064.pdf+ibm+leasing+mainframe+learson&hl=en

the upcoming share meeting is in boston (8/21-8/26, 2005) ... the
meeting where i gave the above presentation was in Atlantic City
... fall '68.

three people from the science center had come out and installed cp67
at the univ. the last week of jan. '68 ... and then i got to go to the
spring share meeting in houston where cp67 was officially announced.

3705

howard@ibm-main.lst (Howard Brazee) writes:
Neat. All I have are a few crashed disk packs.

My ideal nerdy gift would be the shell of an old Cray, with the
bench around the circular computer.

some of the people at almaden research presented me with the front
panel of a HYPERchannel adapter box ... which had been engraved with
some stuff ... including a stylized image of the almaden research bldg.
http://www.garlic.com/~lynn/subnetwork.html#hsdt

cray and thornton had worked together at cdc ... thornton left to form
network systems corp ... which produced HYPERchannel for heterogeneous
high-speed interconnect.

More on garbage

"Jon A. Solworth" writes:
Availability is a liveness issue, the other two are safety issues.
A system which does *nothing* is *always* safe, and hence by definition
the safety issues (confidentiality and integrity) are satisfied.

A system which does absolutely nothing cannot be secure. A system must
have some function, and security is there as one of the ways of
protecting that function from adversaries.

sporadically over the last 30 plus years ... i frequently ran into the
comment that the purpose of security is to make systems unusable (if
you can't accomplish anything, then hopefully neither can the
attackers) ... and frequently security and human factors can be
diametrically opposed.

a simple scenario is 3-factor authentication

• something you have
• something you know
• something you are

where something you know is a shared-secret and the security rules
require a unique shared-secret for every different, unique security
domain .... leading to the current situation where people are required
to memorize scores of unique shared-secrets that are never
written/recorded.

somewhat the opposite is using trivial personal information (and
supposedly easily remembered) as the something you know
authentication shared-secret ... with a large number of different
security domains selecting secrets from a small common pool of
personal information (ss#, birth date, mother's maiden name, etc).

pg_nh@0502.exp.sabi.UK (Peter Grandi) writes:
It is so sad... I remember such tools well, and that very very few
people even know that they ever existed or would care is part of my
usual refrain ''the lost art of memory management'' and the general
''thirty years of valuable research down the drain of oblivion''.

there was a lot of interplay between the work on systems and the study
of systems. some of this sort of started the evolution of performance
tuning and management into things like capacity planning.

there was a huge amount of benchmarking and statistics gathering
... preparing for the release of the resource manager there was a
final phase of 2000 benchmarks that took 3 months elapsed time to run.
there were configuration variables and workload variables. the first
1000 or so benchmarks were done with preselected values using past
knowledge. for the final 1000 ... there was an apl analytical model
... into which all the prior runs had been input ... and it was
modified to select the benchmarking parameters (and look for things
like anomalous operating points).
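
the original was an apl model; a rough sketch of the idea in
python/numpy (invented data ... not the actual model): fit a simple
model to the prior runs, then flag anomalous operating points as
candidates for the next round of benchmark parameters:

  import numpy as np

  rng = np.random.default_rng(0)
  X = rng.uniform(size=(100, 3))    # e.g. real storage, users, i/o rate
  y = 2*X[:, 0] - 3*X[:, 2] + rng.normal(scale=0.05, size=100)
  y[17] -= 1.0                      # one genuinely anomalous operating point

  A = np.column_stack([X, np.ones(len(X))])     # add intercept
  coef, *_ = np.linalg.lstsq(A, y, rcond=None)
  residuals = y - A @ coef

  # operating points the model can't explain get benchmarked around next
  anomalous = np.flatnonzero(np.abs(residuals) > 3 * residuals.std())
  print("runs to explore further:", anomalous)  # -> [17]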

vs/repack from the science center was an example: tracing ... which
then used some cluster analysis and human observation (like for hot
spots) ... to reduce both working set size as well as page fault rate
characteristics.

a couple years ago, I ran into a descendent of the performance
predictor. it had greatly evolved over the years ... and then somebody
had obtained rights to it, run it thru an apl->c translator, and was
using it in a consulting business, targeting mostly large complex
mainframe operations.

anyway a customer ... had an extremely large business critical
application that was tending to fully utilize a very large number of
mainframe processors. Extensive I-address sampling had been used to
identify hot spots for review and recoding. The modeling tool had also
been used to further identify possible bottlenecks that could benefit
from redesign and/or rework.

it turns out that in the early work at cambridge, the different
methodologies tended to turn up different kinds of performance areas
of extreme interest. cambridge had heavily instrumented cp67 and had
years of 7x24 activity data. it was found that multiple regression
analysis of the activity counters could turn up things of interest
that weren't identified by either i-address sampling (for hotspots) or
modeling (sometimes driven by the same activity count information used
in the multiple regression analysis).
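
a sketch of the multiple-regression-on-counters idea (python/numpy,
made-up data): regress thruput on the activity counters and rank the
counters by effect size ... a large coefficient points at a subsystem
worth reworking, even when no single hot spot shows up in i-address
sampling:

  import numpy as np

  rng = np.random.default_rng(1)
  counters = rng.uniform(size=(500, 6))  # 6 activity counters, 500 intervals
  # hidden truth for the illustration: counter 4 is dragging thruput down
  thruput = 10 - 4.0*counters[:, 4] + rng.normal(scale=0.2, size=500)

  A = np.column_stack([counters, np.ones(500)])
  coef, *_ = np.linalg.lstsq(A, thruput, rcond=None)
  ranked = sorted(enumerate(coef[:-1]), key=lambda kv: abs(kv[1]),
                  reverse=True)
  print("counters by |effect on thruput|:", ranked[:3])  # counter 4 first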

Anyway, multiple regression analysis of a large number of activity
counters turned up something that needed rework (and wasn't
identifiable by the other methodologies) and resulted in something
like 15 percent overall improvement (and we are talking a very large
number of mainframe processors here running this large business
critical application).

i also used to periodically stop by tymshare ... and got to pick up
this new thing they had gotten called adventure. since they were a
commercial time-sharing service ... when their executives found out
that there were games on the system ... they wanted it immediately
removed ... but then changed their mind when they were told how much
revenue adventure was generating.

i had also mentioned it to a couple people on the internal network
... and a couple days later a copy arrived over the internal network
from somebody in the UK (who had picked it up at a local univ).

johnf@panix.com (John Francis) writes:
Well, it's a feature of the new premium digital cable services.
Digital/optical cable services have more than enough bandwidth to
provide a TV-quality streaming video feed to every consumer; they
already offer cable modem service with more throughput than that.

See, for example, Comcast's "ON DEMAND" service. They tout this as
having 3000 programs you can choose from. That's probably a bit of an
exaggeration, but there seem to be several hundred shows available at
any time - usually for a one or two month period. That's more than
could be stored on most DVRs, so push technology isn't adequate.

Conspicuous by their absence, at present, are the major broadcast
networks.

there were a number of efforts formed in the early 90s that thought
video-on-demand was the next big thing. slightly related recent
post (late '94):
http://www.garlic.com/~lynn/2005k.html#7 Firefox Lite, Mozilla Lite, Thunderbird Lite -- where to find

i knew people at the time working on database infrastructures that
were supposedly targeted at delivering movies to the home consumer
market.

from spring of 94:
ADSL/ATM TO YIELD RESIDENTIAL BROADBAND NETWORKS/VIDEO ON DEMAND
The combination of two key elements, ATM switches and ADSL
technology, is the major step forward that will facilitate
commercially viable Residential Broadband Networks.

haynes@alumni.uark.edu (Jim Haynes) writes:
I remember from the trade literature at the time a different reason
for the Amdahl clones. Gene Amdahl always was a fan of the
high-performance single processor machines. As an IBM insider he
knew there was a pricing formula for the various 360 models which
was to maximize profit across the entire line. By his calculation
this resulted in the top of the line machines being overpriced in
relation to cost of manufacture. He wanted the price of the
high-end machines reduced to increase sales and allow him to develop
even higher performance models. When top management would not bend
to his desires he decided to start a competing company, taking
advantage of the price umbrella that IBM had practically guaranteed
to him.

ibm would frequently get the science and engineering of manufacturing
production extremely optimized (lots of studies of the manufacturing
process, yields, quality, optimizations, etc) ... so the truth might
just be the opposite; processors at the knee of the technology curve
were the most amenable to mass production. the high end tended to have
much higher upfront R&E costs (pushing the technology) and its
techniques frequently were much less adaptable to really high volume
manufacturing.

The high-end tended to have lower volumes than the mid-range (which
almost by definition tended to be at the knee of the price/performance
curve) ... and so it was much more difficult to recover the larger
upfront R&E costs and/or justify the upfront costs of developing
extremely high volume manufacturing techniques.

this gets more complicated later in large single chip VLSI designs
... where the complexity and performance of large chips can go up
... but if they manage to capture sufficient market volume ... it can
justify huge upfront R&E (both chip design and manufacturing
efficiencies) ... and the market volume then can actually result in
lower per item price.

the clone market was somewhat different ... amdahl was coming into a
market that had a large install base of MVS & virtual memory at the
high-end. in that period, there was a joke that the avg. MVS shop
required an avg. of 20 IBMers as part of the care & feeding.

while unbundling had introduced separate pricing for applications and
services ... kernel software was nominally free (modulo the number of
vendor staff needed to keep it running well). amdahl's first big
uptake was in the technical/MTS market.

there were two non-strategic virtual memory systems developed for the
360/67 ... one was cp67 at the cambridge science center and the other
was MTS at UofMich. MTS was ported to 370 virtual memory and was
installed at some number of universities. going after the MTS
accounts, amdahl didn't have to fight customer concerns about the
large vendor MVS staff no longer being around. cp67 had morphed into
vm370 ... for virtual memory 370s ... and was offered by IBM ... but
there were a large number of places in marketing where it was viewed
as non-strategic and customers provided the majority of their own
support ... w/o a lot of vendor hand-holding.

the penetration of amdahl into "true blue" (commercial) accounts had
yet to happen when i was getting ready to release the resource
manager. one of ibm's premier, extremely large, true blue accounts was
considering an amdahl order. this sort of prompted the whole
transition to also charging for kernel software ... and the resource
manager (and I) got picked to be the guinea pig.

note ... this customer had so many real true-blue MVS systems in the
datacenter (in addition to vm) that I don't think the customer figured
one of the machines being a different color would drastically reduce
the number of vendor people helping with the care and feeding of
MVS. i got to be pretty good friends with the customer people at the
account ... i was being encouraged to drop by and talk to them as
frequently as possible; i think somebody was hoping that if the
customer got to really like me, they might cancel the amdahl order
(staving off amdahl breaking out of the fringe techy market into the
real true-blue commercial world).

IBM/Watson autobiography--thoughts on?

being on the right part of the price/performance curve also started to
drive the cluster products .... hoping to significantly lower the
upfront R&E per unit of bang for the buck ... which can snowball;
greater volume can mean that you can do somewhat larger upfront R&E
amortized over a larger number of units.

Chris Gray writes:
Now, doing that may sound like a nice "pattern", but really it was
just me thinking "gee, I can make this cool new editor without
having to write all the hard buffer and file management stuff - just
let 'ed' do it". I think this was in the early-to-mid 1970's, way
before "programming with patterns" was invented.

as an undergraduate in the 60s ... after the university got cp67/cms
... there was a cms fortran graphics subroutine library for the 2250
vector graphics display (done by lincoln labs) ... i played around
with interfacing the graphics library as a front end to the cms edit
command.

the university had a 2250m1 ... which was direct channel
attached to the 360 (no 1130).

cambridge science center had a 2250m4 (i.e. with an 1130 as the
controller) ... somebody ported spacewar to it (running on the
1130). two players used the 2250 keyboard split into left & right
halves ... with various keys mapped to movement & firing functions.

daw@taverner.cs.berkeley.edu (David Wagner) writes:
My real motivation was to get people to stop thinking of a
one-size-fits-all view of security properties, and to recognize that
the set of security properties needed for each application is
application-dependent: there is no one answer that will fit all
systems. The one-size-fits-all view leads to fallacies, such as
thinking that anything that turns an integrity problem into an
availability problem is useless (it isn't necessarily useless;
whether it is useful or not depends on the application's security
requirements). Trying to say things like "for this broad class of
systems, availability is always critical" (an exaggeration for
effect) strikes me as heading towards the border of dangerous
thinking; even if it hasn't yet crossed that border, it might be
more productive to focus on specific applications and understand
their security requirements individually.

to some extent, a credit card limit is set proportional to risk
assessment.

some gov. stuff wants 30 years confidentiality ... and some commercial
stuff has talked about 50 years confidentiality. a lot of that stuff
may have very low availability requirements ... say a lot less than
what the 1-800 system or a 911 system wants ... around five-nines
... less than 5 minutes of outage per year.

we talked to the people doing the 1-800 system. they had an
implementation that was claimed to be a hundred percent up when it was
up ... but periodically needed to be taken down for maint ... which
could be several hrs. they could blow a century's worth of downtime
(based on 5 minutes per year) in a single maintenance session.

a $100-or-less transaction that might have a subsecond lifetime can
live with standard DES. the risk is (possibly) $100 if the attacker
can brute force the DES key in less than the lifetime of the
transaction (possibly a second or less). and there isn't necessarily a
confidentiality requirement ... purely an integrity requirement. the
issue is what is the probability that an attacker is going to spend a
couple million to brute force a derived DES key in less than a second
for a $100 return ... and how many times might they try it.

the capital reserve amount (as security) set aside is related to the
calculated risk. one of the battles in the new basel-ii requirements
was whether or not to keep the *new* qualitative data section ...
http://www.bis.org/publ/bcbsca.htm#pgtop

risk adjusted capital requirements up until then had been based
primarily on quantitative numbers. the qualitative data stuff
disappeared from basel-ii .. but quite a bit of what had been in the
qualitative data section now appears in sox.

there can be different requirements for the individual components in
different environments ... aka short-lived transactions might have
little or no confidentiality requirement ... but higher integrity
requirements ... especially if the transaction carries very little
personally identifiable information (PII) ... slightly related recent
post
http://www.garlic.com/~lynn/aadsm19.htm#35

there was some reference recently to digital certificate based
infrastructures focusing heavily on whether a 1024 bit or 2048 bit key
is needed ... which is mostly meaningless; attackers are finding it
easier to exploit other parts of the infrastructure than to brute
force the keys (the various institutions worried about 30-50 year
confidentiality lifetimes might be more worried about it).

i've written some about how most infrastructures out there should be
more worried about the overall integrity of the authentication process
than about the difference between 1024 and 2048 bit keys used for
authentication purposes (again, this key length distinction may make a
whole lot more difference to institutions worried about 50 year
confidentiality than to institutions worried about much shorter period
authentication operations) ... and about generalizing the security
proportional to risk phrase to parameterised risk management.
parameterised risk management could make authorization decisions
(analogous to the simple credit card limit) dynamically on a
transaction by transaction basis given a broad range of risk/threat
factors.

rather than creating a static set of security requirements ... allow
the various components of security to be relatively fluid ... and then
make dynamic decisions about approval or non-approval based on the
specific security component levels for a specific operation.
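
a minimal sketch of such a dynamic decision (python; all the names,
weights and the risk budget are invented for illustration):

  def approve(value, auth_strength, channel_risk, history_score):
      """true if estimated exposure fits the per-transaction risk budget;
      exposure shrinks with stronger authentication and good history,
      grows with transaction value and channel risk."""
      exposure = value * channel_risk * (1 - auth_strength) * (1 - history_score)
      return exposure < 5.0         # hypothetical risk budget

  print(approve(100, auth_strength=0.95, channel_risk=0.5, history_score=0.8))
  # True ... same amount, weak authentication over a risky channel:
  print(approve(100, auth_strength=0.20, channel_risk=0.9, history_score=0.1))
  # False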

Ancient history

nmm1@cus.cam.ac.uk (Nick Maclaren) writes:
Which is orthogonal and irrelevant to my point. If nobody knows where
the boundary is between overflowing and access to an extended area
(i.e. permitted use), then it is impossible to insert such checking
correctly. And that is the case.

we were working with one of the reed-solomon companies on FEC for
high-speed communication. they had done a lot of the work for the
cdrom iso standard encoding ... and stuff like interleaving (lots of
faults are bursts or scratches ... so interleaving can gain some
improvement). they also worked on various parts of digital
broadcasting technology. the claim was that not only did appropriately
encoded digital transmission reduce the bandwidth requirements
(vis-a-vis analog) ... but the encoding also significantly improved
reception quality ... equivalent noise injected into analog & encoded
digital signals could result in total analog white-out ... while
digital still delivered a relatively good quality picture.
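
a sketch of just the interleaving part (python; no actual reed-solomon
coding here): symbols are written row-wise and transmitted
column-wise, so a channel burst gets spread thinly across many
codewords, each of which then has few enough symbol errors for the
code to correct:

  def interleave(symbols, rows, cols):
      return [symbols[r * cols + c] for c in range(cols) for r in range(rows)]

  def deinterleave(received, rows, cols):
      return [received[c * rows + r] for r in range(rows) for c in range(cols)]

  data = list(range(20))            # 4 codewords ("rows") of 5 symbols
  tx = interleave(data, rows=4, cols=5)
  tx[6:10] = ["X"] * 4              # a 4-symbol burst on the channel
  rx = deinterleave(tx, rows=4, cols=5)
  print([rx[i*5:(i+1)*5] for i in range(4)])  # at most 1 "X" per codeword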

the industry meetings in the '90 timeframe with the dept. of commerce
had a lot of national competitiveness overtones. supposedly if hdtv
technology went the wrong way ... foreign industry could capture the
hdtv manufacturing market ... and hdtv technology was going to be the
basis for all new technologies.

there were issues raised about the fine details of the hdtv spec that
might sway competitiveness one way or another. a problem was that
there were already some receivers manufactured outside the us that had
agile reception technology (the same set could process all three major
analog conventions ... and work was being done so that receivers could
dynamically adapt to handle any possible hdtv digital convention).

i.e. re:
http://www.garlic.com/~lynn/2005k.html#23 More on garbage
another aspect of the different dimensions of PAIN characteristics
(privacy, authentication, integrity, non-repudiation) as part of
security ... is the confidentiality required for credit card
transactions on the internet (or anyplace for that matter). the
transactions carried the account number and related information in the
clear ... and that information was sufficient for performing
fraudulent transactions.

part of the issue ... is that the value of harvesting things like
merchant transaction files is worth a whole lot more to the crooks
than the resources that are available to merchants for countermeasures
... in part because the information in the transaction logs is
required for a wide range of other business processes ... and you just
can't make the data totally disappear ... part of the security
proportional to risk theme
http://www.garlic.com/~lynn/2001h.html#61

i've often joked that you could completely blanket the planet in miles
deep encryption ... and there would still be account number leakage
because of their use in various business processes.

part of the x9.59 approach was a pair of business rules:

1) x9.59 transactions have to be authenticated
2) PANs used in x9.59 authenticated transactions can't be
also used in non-authenticated transactions

So the X9.59 PAN account numbers can still occur all over the place
... and harvesting of the information isn't sufficient for the crooks
to generate fraudulent transactions. The issue here is that the
business rule application of integrity then significantly minimizes
the requirement for privacy in order to provide security. It is also
somewhat a recognition that the pervasive business uses of PANs pretty
much preclude any application of encryption being sufficient to close
all the places where PANs can leak out.
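
a sketch of that business rule in effect (python; invented account
numbers ... not the actual standard's processing):

  X959_PANS = {"4000111122223333"}    # hypothetical x9.59-flagged accounts

  def authorize(pan, authenticated):
      if pan in X959_PANS and not authenticated:
          return "decline: x9.59 PAN requires authenticated transaction"
      return "continue normal authorization"

  # a harvested PAN by itself is useless:
  print(authorize("4000111122223333", authenticated=False))
  print(authorize("4000111122223333", authenticated=True))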

Another approach that has been tried is one-time PANs ... once a
specific PAN has been used in an authorized transaction ... the same
PAN can't be used again in a different authorized payment transaction.
Before use, the PANs have to be kept secret ... but after each PAN is
used ... it can be utilized all over the place for numerous other
business processes ... but can't be used again for another financial
approval transaction. Again the issue is that once a PAN is used in a
transaction ... there are all sorts of other parties (many backroom
business operations) that subsequently need access to the PAN in
connection with various and sundry business processes. You can't make
those business processes go away ... and encryption can't be used to
plug all the possible leak points.
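
a sketch of the one-time PAN rule (python, illustration only): the PAN
must be secret until first use, after which it can circulate freely
because it can never be approved again:

  issued = {"9000000000000001", "9000000000000002"}  # unused one-time PANs
  used = set()

  def authorize(pan):
      if pan in used:
          return "decline: PAN already used"
      if pan not in issued:
          return "decline: unknown PAN"
      issued.remove(pan)
      used.add(pan)
      return "approved"

  print(authorize("9000000000000001"))   # approved
  print(authorize("9000000000000001"))   # declined replay of harvested PAN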

In the x9.59 scenario the appropriate use of end-to-end business
integrity ... significantly mitigates the fraud prevention
requirement/need for preventing PAN information leakage (aka
confidentiality and privacy) ... since simple knowledge of the PAN is
no longer sufficient to perform a fraudulent transaction.

IBM/Watson autobiography--thoughts on?

hancock4 writes:
Oh yes. The 1401 became the "bargain basement" computer and was still
quietly marketed for a few years after S/360 came out. The boxes
still had a little bit of life in them.

by the late 50s ... it was starting to be realized that software
development and software conversions from one machine to another were
becoming a dominant market factor in the computer industry.

the issue of coming up with a broad compatible computer line was an
attempt to mitigate such significant cost issues for the customer in
the future. that still didn't eliminate the one-time requirement for
existing customers to convert whatever they currently had to any new
platform.

any existing implementations might even be expected to linger for some
time ... and customers might even find that it was cheaper to throw
hardware at the application ... than to convert it ... eventually
hoping for some future demise ... possibly because of (application)
obsolescence.

hancock4 writes:
I presume in the early days of computing the hardware was
so incredibly expensive compared to programmer wages that
software cost wasn't as big a concern yet. Programmers
spent a heck of a lot of time shoehorning applications into
a tiny memory space, detecting and recovering from numerous
hardware errors, and pushing the technical envelope with fancy
tricks.

At some point along the way good programmers became scarce.
Further, at some point along the way the curves of the cost
of hardware vs. the cost of software crossed and attitudes
changed.

I remember a comp sci teacher telling us that Fortran
logical IFs were inefficient and to use arithmetic IFs
instead (never found out if that was really true on S/360
or B-5500). As a team leader, I pushed COBOL COMP-3 for
numeric fields and COMP SYNC for internal fields such as
subscripts. I still use that stuff but for modest sized
files it doesn't seem to make much run-time difference
with today's superfast machines. If my employer's mainframe
isn't real busy a complex job runs in less than a second! (And
this mainframe does the work of four older ones).

P.S. Saw the write-up on you as a computer historian in
a recent IBM magazine. Neat article, congratulations!

btw ... has anybody actually seen a hardcopy? they had a photographer
come out for a photoshoot ... but the pictures don't show up in the
online version.

... back to the thread ...

note that it wasn't necessarily either programmer salary or hardware
costs that were always the primary factor. whether or not an
application was available on the next larger machine (as the company
grew) could dominate all costs (or from the other viewpoint ... the
costs associated with lack of application availability could
dominate).

we talked to various companies about what the costs would be if an
application was not available ... these examples are a little more
severe than some of the costs associated with lack of application
availability in the 50s (however, not having the cost savings from
some dataprocessing application can be turned around and viewed as a
loss ... and/or used to justify hardware and programmer expenses).

one financial company that had an application that managed float on
cashflow ... claimed the application earned more in 24hrs than a
year's lease on the 50-story office bldg it was housed in plus a
year's salary for every person that worked in the bldg (conversely, if
the application was not available ... they didn't earn that money).

another company with a several hundred million dollar datacenter
claimed if the datacenter was down for a week, the loss to the company
would be more than the cost of the datacenter (i.e. they easily
justified the several hundred million dollar expense for duplicating
the datacenter). this was in the era when we coined the terms
disaster survivability and geographic survivability to
differentiate from disaster/recovery.

for some topic drift ... there is a recent thread somewhat related to
how much availability should there be (do applications with privacy
requirements require equivalent availability requirements)
http://www.garlic.com/~lynn/2005k.html#23

"Deacon, Alex" writes:
Do you have any suggestions as to how the setting of these OCSP time
values should be done? I guess its not clear to me why you feel the
CA's need to agree on this. Why wouldn't the client simply make its
decision based on its local time (which I agree may be far from
correct) and the values in the response? Clients make these
decisions every day with certs, so why would OCSP responses be any
different? Is it the "producedAt" time that confuses the issue?

Regarding the various trust models, I agree there are too many
choices. The "delegated" trust model is the only one that really
makes sense for large consumer-facing PKI's, in my opinion.

one of the issues in the CRL push model ... is that it's the relying
party which is judging the risk (sort of the inverse of trust) ... and
they know the basis of their dynamic risk parameters ... one issue is
that as the value of the transaction goes up ... the risk goes up. the
other is that the longer the time interval ... the bigger the risk.

the problem is that since it is the relying party that is taking the
risk ... and understands their own situation ... it should be they who
decide the parameters of their risk operation ... i.e. as the value of
the transaction goes up ... they may want to reduce risk in other ways
... which might include things like trust time windows.

in the normal traditional business scenario ... the relying party is
the one deciding how often they might contact 3rd party trust agencies
(e.g. credit bureaus).

PKI/certificate operations have frequently totally inverted standard
business trust processes. instead of the relying party being able to
make contractual agreements and business decisions supporting their
risk & trust decisions .... the key owner has the contractual
agreement with any 3rd party trust operation (i.e. the key owner buys
a certificate from the CA).

The digital certificate model has been targeted at the offline
business situation where the relying party had no other recourse to
the real information (sort of the letters-of-credit scenario from the
sailing ship days). This sort of continued to exist in market niches
where the value of the operation didn't justify the relying party
having direct and timely access to the real information. The problem
is that as the internet has become more & more ubiquitous and as the
cost of direct and timely access to the real information has dropped
... digital certificates are finding the low/no-value market segment
shrinking (as the cost of direct access to the real information drops,
relying parties can justify using real information in place of stale,
static certificates for lower & lower valued operations).

A problem facing the PKI/certificate model is that it

1) is a business solution designed for a problem that is rapidly
disappearing ... the relying party being unable to access and/or
unable to justify direct and timely access to the real information
(making do instead with stale, static certificate information)

2) tends to have been deployed where the contractual business
relationships didn't follow commonly accepted business practices.

From a different standpoint ... rather than having propagated trust
pushed to the relying party ... the standard business model has the
relying party making the decision about the required level of
integrity and trust for the business at hand and then tends to pull
the information whenever economically and practically feasible.

The original PKI/certificate model was targeted at the market segment
where the relying party didn't have practically feasible recourse
(i.e. timely and direct access to the real information). As direct and
timely access to the real information has become practical and widely
deployed, PKI/certificate business operations have attempted to move
into the market segment where it may still not be economically
justified for the relying party to have direct and timely access to
the real information (and where the relying party has direct business
control over those operations).

However, with not only a ubiquitous online environment coming about
... but also a rapid decline in the cost of that ubiquitous online
environment ... it is easier and easier for relying parties to justify
direct and timely access to the real information .... leaving the
no-value market niche for the PKI/certificate business operation. One
business downside is that when trying to address the no-value market
niche ... it may be difficult to convince relying parties to pay very
much for certificates in support of no-value operations.

when tymshare was bought ... some number of things were spun off. I
got brought in to do the due diligence evaluation on gnosis for the
keykos spin-off (somewhere in a box i may have some sort of gnosis
specification document).

forbin@dev.nul (Colonel Forbin) writes:
A high school student working as a waitron in a restaurant can
easily capture dozens of credit card numbers in a half hour,
including the expiry date and the "authentication code."

How much RAM is 64K (36-bit) words of Core Memory?

"Phil Weldon" writes:
On one side is core memory: multiple microsecond cycle time (cycle
because reading a core memory bit requires inverting, then
restoration); hugely expensive (~$0.20 per bit per month); many
different types of incompatible memory organization and data formats;
bigger than a breadbox (~30,000 bits).

the original 360 models were the 30, 40, 50, 60, 62, & 70.

the 60, 62 & 70 were going to have one microsecond core storage (with
one mbyte ... four "core boxes" ... four-way interleave and 8-byte
fetch/store). the 70 was going to be a hard-wired, faster version of
the 60. the 62 was going to be a 60 ... with a virtual
memory/dat-box.

before they shipped, 750ns core memory technology was developed; the
60, 62, & 70 never shipped, and the upgraded 65, 67, & 75 shipped with
the 750ns memory.

How much RAM is 64K (36-bit) words of Core Memory?

"Phil Weldon" writes:
Also, in the epoch of core memory, great emphasis was placed on
programming to reduce memory requirements even at the expense of
execution time because memory was so expensive and I/O so slow.

If you want a number just divide the number of bits in a word by 8
to get rough equivalence. It won't mean much, but there it is. As
far as remember, 'core memory' had come and gone by the time the IBM
System 360 popularized 'byte'. In fact, 'core memory' had a pretty
short run.

the 60s systems were trading off relatively abundant i/o capacity
against relatively scarce and expensive real memory. indexes for files
and libraries were kept on disk, and I/O search commands could be used
to take strings from main memory and look for the corresponding value
out in the disk structures. a disk volume directory ... VTOC (volume
table of contents) ... would use multi-track search ... doing a
filename lookup search for every file open operation. library files
... PDS (partitioned data set) ... also used multi-track search of the
library index to find specific members in the library.

by the mid-70s, the trade-off had started to change ... with i/o
thruput becoming a significantly more constrained resource ... and
much larger real memories becoming available. caching of indexes,
data, files, etc is taken for granted now (the reverse of the 60s:
using abundant real memory as a trade-off against relatively scarce
i/o capacity) ... but back then, they remained all on disk.

i was once called into a large national retail operation that ran a
large multisystem os batch environment that was regularly having
severe thruput problems. after looking at tons of data trying to
correlate disk usage and thruput across multiple systems (sharing the
same disks ... but each system only reporting its individual disk
usage activity) ... I started to zoom in on the problem.

turns out they had a shared application library/PDS ... shared across
all systems. the library/PDS had a three-cylinder 3330 PDS directory.
every time an application library member was fetched ... it had to do
a multi-track search of the 3-cylinder PDS directory. a 3330 cylinder
had 19 tracks ... avg. search 1.5 cylinders or approx. 29 tracks. a
3330 spun at 3600rpm or 60revs/sec. a multitrack search of 29 tracks
took just under .5 seconds ... during which time the drive, controller
and channel were all busy. in this condition ... the avg. application
member loading per second was just under two ... and this was the
aggregate across all machines in the datacenter (all bottlenecking on
the same shared disk library). PDS and multitrack search are still
around ... as opposed to other environments that make extensive use of
electronic memory for caching of disk and file structures as well as
the actual data.
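
the arithmetic, worked (python):

  tracks_per_cyl = 19
  revs_per_sec = 3600 / 60                  # 3330 spins at 3600 rpm
  avg_search_tracks = 1.5 * tracks_per_cyl  # avg 1.5 of 3 cylinders
  avg_search_sec = avg_search_tracks / revs_per_sec   # 1 rev per track
  print("avg directory search: %.2f sec" % avg_search_sec)  # ~0.48 sec
  # the search alone caps aggregate member loads at ~2/sec; adding the
  # seek and the actual member read puts it just under two
  print("upper bound from search alone: %.1f/sec" % (1 / avg_search_sec))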

in the late 70s i started to make comments about the drastic reduction
in disk relative system thruput ... at one point making the
observation that disk relative system thruput had declined by a factor
of 10 over a period of 15 years. the disk division didn't care for
this and assigned their performance group to refute the statement.
after a couple months, they came back and essentially said that i had
slightly understated the problem (aka if processor and memory had
increased by factors of 50 while disk had increased by a factor of
less than five ... then disk relative system performance had declined
by a factor of 10).
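
i.e. the arithmetic:

  cpu_memory_growth = 50      # processor & real memory over ~15 years
  disk_growth = 5             # disk thruput over the same period (<5)
  print("relative decline: factor of", cpu_memory_growth / disk_growth)
  # -> factor of 10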

Determining processor status without IPIs

Joe Seigh writes:
Is there a way in theory to determine the running/not running status
of other processors without resorting to a
probably expensive IPI operation? This is in the context of a
virtual environment where the processors are virtual and may or
may not be running on a real processor at the moment. An obvious
solution would be to provide a hypervisor call to provide the virtual
processor status but you have the problem of multiple VM OSes and
they're barely keeping up with basic minimal simulation as it is.
It would be nice if Intel and AMD were a little proactive in the
virtualization area and architected it in as part of the basic
architecture rather than after the fact with a too little, too late
solution.

in the 60s and early 70s ... there were some number of programs that
thought they were "stand-alone" on the real machine and would do
things like TIO busy loops ... or something similar. several things
like TIO would enter the hypervisor kernel for emulation in any case
(and special traps were inserted to special-case some of the more
onerous cases).

the mainframe virtual machine microcode performance assists eventually
evolved into also providing the "LPAR" subset. the performance assists
still work for the hypervisor operating system ... but a subset of the
hypervisor operating system has been instantiated in the microcode of
the basic machine ... making it possible for installations to
partition the machine for production operation. besides the
virtualizing tricks for pure hypervisor kernel operation ... quite a
few also provide for operation in the LPAR environment.

Determining processor status without IPIs

Joe Seigh writes:
I'm aware of some of the things VM and its guest machines used to do
since I worked in VM development at one time. Guest machines knew
if they were running on virtual processors and would use spin wait
loops with hypervisor calls to preempt and not waste cycles
spinning. That was feasible since there was only one VM hypervisor
and the guest OSes were all by the same vendor, IBM.

later on things somewhat improved ... but in the early days ...
cambridge and cp67 (and even into the vm370 days) were frequently
viewed as the enemy in many quarters ... in part because they appeared
to be an internal operation competing with "strategic" efforts for
internal resources ... external competitors were sometimes treated
better than internal competitors.

i worked on some number of projects where the effort i was on was
construed as being in competition with an official, corporate
"strategic" effort ... and the official corporate "strategic" effort
would prefer to subcontract to an outside entity (which provided no
internal competition) rather than deal with another internal
operation.

also ... while unbundling had been announced on 6/23/69 ... kernel
software was still free ... and all sorts of people were in the habit
of extensively modifying kernel source. it wasn't until the mid-70s
that the transition started being made to licensing and charging for
kernels ... and that happened incrementally. when i was doing the
resource manager ... it got selected to be the guinea pig for
licensed/priced kernel software (and i got the prize of spending six
months on and off with the business people working on the business
rules for pricing kernel software).

Lawrence Statton N1GAK/XE2 writes:
Hell, Barb -- I can't remember what I had for breakfast yesterday,
much less what flight I took 16 years ago[1] :). I can guess that it
was probably American -- they had a routing BOS/SLC/SJC that I used
for most of my Calif <--> Boston travels. On the other hand, I was
going to stay with friends in Oakland, so I might have flown into OAK,
in which case I have no idea what carrier I took. On the gripping hand,
I also had a soft spot for Delta, because they served Pepsi products,
and at that point in my life I was quite brand-loyal. Now I'm old and
my taste buds have all died, so I'll drink just about any brown cold
fizzy beverage. Except Moxy.

at the height of the internet bubble, american put in non-stops
between sjc and bos as well as sjc and aus.

in the early 80s, i used to take the twa #44 red-eye sfo to kennedy a
couple times a month on monday night and return on twa #857(? ... the
tel aviv, rome, kennedy, sfo flight) on friday afternoon. twa went
bankrupt and then i switched to panam (for the monday night
redeye). panam then sold its pacific fleet to united to concentrate on
atlantic routes. i then switched to american for the monday night
redeye ... although sometimes took united (both my twa miles and my
panam miles just evaporated).

i do blame twa for originating direct (non-connecting) flights with a
change of equipment. in the very early 70s, the overnight parking fee
at sfo was such that it was cheaper for twa to fly the short hop to
sjc and park the plane overnight there. then in the morning the plane
would fly back to sfo. on the sjc->sfo leg, it carried two flight
numbers ... one where the equipment continued to seattle ... and the
other was a change of equipment in sfo that went to kennedy. the
explanation was that on reservation screens all the direct
(non-connecting) flights are listed first ... followed by connecting
flights. the change of equipment gimmick ... managed to get
"connecting" flights listed at the front along with all the other
direct flights (direct flights had a much higher probability of being
reserved than connecting flights that appeared later on the screen).
after that, i started to notice some number of other flights
(typically with multiple flight numbers) that involved the "direct"
gimmick with change of equipment.

Determining processor status without IPIs

here is a recent posting of stuff i did as undergraduate for cms when
running in a virtual machine ... to optimize virtualizing
overhead (all the source was shipped and lots of customers could make
modifications ... this was also before the unbundling announcement)
http://www.garlic.com/~lynn/2005j.html#54 ALLOC PAGE vs. CP Q ALLOC vs ESAMAP

there are a couple other issues when there is paging going on.

1)

if the hypervisor is paging virtual machine address space ... and the
virtual machine is a multi-tasking operating system ... then it is
possible to reflect some sort of pseudo page fault to the operating
system running in the virtual machine ... potentially allowing the
virtual operating system to task switch. one of the universities
running cp67 with mvt ("real" address space multitasking batch
operating system) modified cp67 to reflect pseudo page faults to mvt
... and mvt to accept the interrupt and attempt to task switch.

ibm in the mid-70s did something similar as part of the 148 ecps
project for vs1 ... where they defined a pseudo page fault function
for vm/370 and modified vs1 to utilize it and attempt to task switch.
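
a toy c sketch of the handshaking idea (purely illustrative ... the
names and structures are hypothetical, not actual cp67/vm370 code):

#include <stdbool.h>
#include <stdio.h>

struct guest { bool pseudo_pf_enabled; };

/* guest side: on a pseudo page fault, suspend only the faulting
 * task and try to dispatch another, instead of the whole virtual
 * machine blocking */
void guest_pseudo_fault(unsigned long addr, bool done) {
    if (!done)
        printf("guest: task waits on page %#lx, switching tasks\n", addr);
    else
        printf("guest: page %#lx arrived, task runnable again\n", addr);
}

/* hypervisor side: fault on a guest page it has paged out */
void hv_page_fault(struct guest *g, unsigned long addr) {
    if (g->pseudo_pf_enabled) {
        guest_pseudo_fault(addr, false); /* reflect fault, guest runs on */
        /* ... start the page-in i/o; on completion: */
        guest_pseudo_fault(addr, true);  /* reflect completion interrupt */
    } else {
        /* default: whole virtual machine blocks until page-in completes */
    }
}

int main(void) {
    struct guest g = { .pseudo_pf_enabled = true };
    hv_page_fault(&g, 0x7f000);
    return 0;
}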

2)

if the hypervisor is paging virtual machine address space ... and the
virtual operating system might also be doing paging ... it is possible
to get into LRU page replacement conflict. The virtual machine
operating system might be searching for the least recently used page
to replace and assign to something else. The hypervisor may also be
looking for the least recently used page to replace. You can get into
some pathological situations where the hypervisor selects and replaces
a virtual page ... that the virtual guest operating system has just
decided to start using for some other purpose. The idea behind LRU is
that the least recently used page is supposedly the least likely to be
used in the future. An LRU algorithm running in a virtual machine
effectively invalidates that assumption ... the least recently used
page is highly likely to (also) be selected by the virtual operating
system for immediate use.
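
a toy c simulation of the degenerate case (purely illustrative ...
both levels run the same LRU scan over the same pages, so they keep
selecting the same victim):

#include <stdio.h>

#define NPAGES 8

static int lru_victim(const long last_use[NPAGES]) {
    int v = 0;
    for (int i = 1; i < NPAGES; i++)
        if (last_use[i] < last_use[v]) v = i;
    return v;
}

int main(void) {
    long last_use[NPAGES];
    long now = 0;
    for (int i = 0; i < NPAGES; i++) last_use[i] = now++;

    for (int step = 0; step < 5; step++) {
        /* hypervisor: page out the least recently used guest page */
        int hv_pick = lru_victim(last_use);
        /* guest: same policy, same reference history ... picks the
         * same page and immediately reassigns it, faulting it
         * straight back in */
        int guest_pick = lru_victim(last_use);
        printf("hypervisor evicts %d, guest reuses %d -> refault\n",
               hv_pick, guest_pick);
        last_use[guest_pick] = now++;  /* reuse makes it most recent */
    }
    return 0;
}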

Determining processor status without IPIs

Andi Kleen wrote:
On the other hand shadow page tables have trouble managing
the Dirty bits properly (or rather if you want to manage
them properly you have to eat a lot of additional faults),
so if you have guests that rely on, or perform better with, accurate
dirty bits then they are better.

the virtual machine's virtual address space tables did the mapping
from "3rd level" to "2nd level" ... when actually running a virtual
address space belonging to the virtual machine ... it used shadow
page tables to do the "3rd level" to "1st level" mapping. the
change/update processes for the shadow page tables followed the
architecture rules for the hardware TLB.
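
a minimal c sketch of the table composition (hypothetical structures
... just showing the two-step lookup that fills a shadow entry):

#include <stdbool.h>

#define NPAGES 1024

typedef struct {
    bool valid;
    unsigned frame;   /* page frame in the next level down */
} pte;

/* compose the guest's table (3rd->2nd level) with the hypervisor's
 * table (2nd->1st level) to fill a shadow entry (3rd->1st level),
 * the mapping the real hardware can actually use. returns false in
 * exactly the cases where translation would fault. */
bool shadow_fill(const pte *guest_pt, const pte *hv_pt,
                 pte *shadow_pt, unsigned vpage) {
    pte g = guest_pt[vpage];
    if (!g.valid) return false;   /* guest page fault: reflect to guest */
    pte h = hv_pt[g.frame];
    if (!h.valid) return false;   /* hypervisor page fault: page it in */
    shadow_pt[vpage] = (pte){ .valid = true, .frame = h.frame };
    return true;
}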

2nd level memory had two sets of reference&change bits. cp67 could
move virtual machine pages into & out of real memory. cp67 maintained
reference and change bits for virtual pages that were actually
resident in real memory. in addition there were the virtual
reference&change bits ... the (virtual) state of the (virtual) pages
in the "2nd level" space.

In effect there were the real hardware reference and change bits and
two sets of "backup" reference and change bits (one for the hypervisor
kernel and one for the virtual machine).

Anytime the hypervisor did a change to the real reference and change
bits ... the current value of the real hardware reference and change
bits was OR'ed into the virtual machine backup bits, the real hardware
bits reset to zero ... and the hypervisor's bit settings placed in the
hypervisor backup bits.

Anytime the virtual machine did a change to the real reference and
change bits ... the current value of the real hardware R&C bits was
OR'ed into the hypervisor backup bits, the real hardware bits reset to
zero ... and the virtual machine's bit settings placed in the virtual
machine backup bits.

Anytime the hypervisor interrogated the R&C bits, it OR'ed the real
hardware bits with the hypervisor backup bits.

Anytime the virtual machine interrogated the R&C bits, it OR'ed the
real hardware bits with the virtual machine backup bits.
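
a small c sketch of the bookkeeping (an assumed structure ... not
actual cp67 code, just the OR/reset/set choreography described above):

#include <stdint.h>

typedef struct {
    uint8_t hw;        /* real hardware reference&change bits */
    uint8_t hv_backup; /* hypervisor's backup copy */
    uint8_t vm_backup; /* virtual machine's backup copy */
} rc_bits;

/* hypervisor changes the real bits: hardware value is OR'ed into the
 * VM backup (so the VM's view isn't lost), hardware cleared, and the
 * hypervisor's new setting goes in the hypervisor backup */
void hv_set(rc_bits *p, uint8_t newval) {
    p->vm_backup |= p->hw;
    p->hw = 0;
    p->hv_backup = newval;
}

/* virtual machine changes the real bits (simulated instruction):
 * hardware OR'ed into the hypervisor backup, hardware cleared, and
 * the VM's setting goes in the VM backup */
void vm_set(rc_bits *p, uint8_t newval) {
    p->hv_backup |= p->hw;
    p->hw = 0;
    p->vm_backup = newval;
}

/* each party sees the union of the real bits and its own backup */
uint8_t hv_query(const rc_bits *p) { return p->hw | p->hv_backup; }
uint8_t vm_query(const rc_bits *p) { return p->hw | p->vm_backup; }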

IBM/Watson autobiography--thoughts on?

John R. Levine wrote:
>But I am surprised that the _incremental_ cost of CPU time to pre-sort
>an input file would be greater than manual sorting, especially when
>the sorted file would be stored on a temporary disk file.

I'm not. In that era, there was often a meter on the CPU that timed
how much you used it (wall clock time), so the incremental rate was
the same as any other rate.

basically leases were somewhat like cellphone billing ... basic plan
and possibly a lot for overages ... based on cpu meter.

the meter ran while the processor wasn't in wait state and/or when the
channels were active.

one of the things that enabled 7x24 cp67 time-sharing service was the
conversion to the "prepare" command for telephone lines. typically
timesharing users were billed for cpu time used ... but the datacenter
was billed for the cpu meter running (which could also run when the
processor was idle ... but the channel was active). the prepare
command to the terminal controller ... basically told the controller
to wait for input from the terminal ... but disconnect from the
channel. prior to having the "prepare" command in the channel program
... the channel ran just waiting for terminal input (and the cpu meter
ran even if nothing else was going on).

having the prepare command ... allowed the service to be up & running
and ready for user activity ... but otherwise have the cpu meter stop
(and not incur any leasing charges) when the system was otherwise idle
(and therefore not earning any revenue from users).

the 370s still had cpu meters ... the big conversion from lease to
purchase still hadn't happened.

the meter on the 370 tended to "coast" for 400 milliseconds ... after
everything had otherwise stopped ... aka both cpu and channels had to
be idle for more than 400 milliseconds for the cpu meter to actually
stop. guess which operating system had a kernel process that would
wake up every 400 milliseconds?

Jukka Aho wrote:
I think the point in this discussion has been that, without referring to
the actual standard number or designation, saying that something is
"ANSI" does not mean anything at all. ANSI X3.64 ("Control Sequences for
Video Terminals and Peripherals") is one thing, ANSI X3.4-1968
("American National Standard Code for Information Interchange (ASCII)")
is something completely different, ANSI H35.2 ("Dimensional Tolerances
for Aluminum Mill Products") is yet another thing again. :)

As far as I know, there is no ANSI-issued standard for IBM Codepage 437.

ibm mainframes had an additional issue with ascii ... which we
discovered when we were fitting out an interdata/3 with a mainframe
channel adapter card, programmed to emulate an ibm terminal
controller.

ibm terminal controllers had the convention of storing the leading bit
off the line into the low-order bit position of a byte ... rather than
the high-order bit position. as a result, when terminal ascii appeared
in the memory of the mainframe processor ... all the ascii "bytes"
were bit-reversed ... and the mainframe ascii<->ebcdic translate
tables were for bit-reversed ascii bytes. one of the early tests of
the interdata/3 terminal controller emulation had ascii bytes being
transferred to 360 memory non-bit-reversed and coming out garbage
after being run thru an ibm bit-reversed ascii->ebcdic translate
table.
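
a small c illustration of the byte-level effect (hypothetical demo
code ... just showing why the translate tables had to be indexed by
bit-reversed values):

#include <stdio.h>
#include <stdint.h>

/* reverse bit order within a byte -- the transform produced by
 * storing the leading line bit into the low-order bit position */
static uint8_t bitrev8(uint8_t b) {
    uint8_t r = 0;
    for (int i = 0; i < 8; i++)
        r = (uint8_t)((r << 1) | ((b >> i) & 1));
    return r;
}

int main(void) {
    /* ascii 'A' is 0x41; bit-reversed it lands in memory as 0x82,
     * so the ascii->ebcdic table entry for 'A' had to be at 0x82 */
    printf("'A' = 0x41 -> arrives as 0x%02x\n", bitrev8(0x41));
    return 0;
}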

early "wheeler scheduler" that i did as undergraduate ... went into
cp67 ... and then dropped in the morph to vm370. i had done a bunch of
"virtual memory management" stuff on cp67 at the science center.
http://www.garlic.com/~lynn/subtopic.html#545tech

a small subset of that was incorporated into base vm370 release 3.
then came the resource manager ... including the wheeler scheduler
... and a whole bunch of other stuff, including a bunch of
restructuring for multiprocessor work ... much of it had been done for
the multiprocessor VAMPS project (which was canceled before being
announced)
http://www.garlic.com/~lynn/submain.html#bounce

the resource manager was the guinea pig for the first charged-for
kernel code.

full smp multiprocessor support was released in vm370 release 4. the
problem was that it was dependent on a bunch of restructuring stuff
that i had done for smp and was already out in the resource manager.
basic kernel stuff related directly to hardware was still free ...
and you couldn't have a free kernel (smp stuff in release 4) dependent
on priced software (lots of stuff in the resource manager). to resolve
this ... about 80-90 percent of the code from the resource manager was
merged into the "free" kernel. come release 5 ... the remaining
resource manager code (including the "wheeler" scheduler) was combined
with multiple shadow table support and a couple other things for
"HPO".

the base kernel was still free ... but the "add-ons" were priced
software.

the original cp67 support for virtual machines that supported
(virtual) virtual memory ... just kept a single set of shadow page
tables around (per virtual machine). the initial HPO code (combined
with what had been called the resource manager) kept around multiple
sets of shadow page tables (per virtual machine; when running mvs with
multiple virtual address spaces ... you didn't have to completely
invalidate all the shadow table entries whenever mvs switched address
spaces ... you could keep around more state information)

Determining processor status without IPIs

Andi Kleen wrote:
I assume you mean the hypervisor with virtual machine here.

The real problem these days seems to be more to manage them for
the guest OS. Guest OS do swapping fine on their own and adding
another layer in the hypervisor would be just wasteful. And the machines
have enough memory to run quite a lot of guests.

If you need to steal pages from a guest use a custom ballooning
driver in the guest that allocates memory and gives the pages
it allocated back to the hypervisor. That seems to be
more efficient than having two different swapping mechanisms
in guest and hypervisor fighting with each other.

depends ...

at one extreme is today's mainframe hypervisor called LPARs (logical
partitions); it is built into the hardware/microcode of the machine
... supporting a limited number of partitions (virtual machines), and
many customers now run in LPAR mode as part of their production/normal
operation. LPARs support a limited number of virtual machines with
contiguous, dedicated real storage (using base/bound relocate for
virtual memory).
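
a minimal c sketch of base/bound relocation (hypothetical demo code
... the point being that "translation" for a partition is just an add
and a limit check, no page tables and no paging):

#include <stdint.h>
#include <stdio.h>

typedef struct { uint64_t base, bound; } partition;

/* returns 0 and fills *raddr, or -1 for an addressing exception */
int translate(const partition *p, uint64_t vaddr, uint64_t *raddr) {
    if (vaddr >= p->bound) return -1;  /* outside the partition */
    *raddr = p->base + vaddr;          /* contiguous real storage */
    return 0;
}

int main(void) {
    partition lpar1 = { .base = 0x10000000, .bound = 0x04000000 };
    uint64_t real;
    if (translate(&lpar1, 0x1000, &real) == 0)
        printf("virtual 0x1000 -> real 0x%llx\n",
               (unsigned long long)real);
    return 0;
}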

sort of at the other extreme is a posting from several years ago about
somebody having created 40,000+ linux virtual machines under the
virtual machine software hypervisor. the address spaces of these
virtual machines were paged (no dedicated real storage); the virtual
machine software hypervisor happened to be running in a "test" LPAR
(i.e. in an LPAR-defined virtual machine with limited assigned storage
and processor) ... couple old posts referencing the 40,000 linux
virtual machine scenario:
http://www.garlic.com/~lynn/2002n.html#6 Tweaking old computers?
http://www.garlic.com/~lynn/2004e.html#26 The attack of the killer mainframes

a flavor of the 40,000 linux virtual machine scenario (usually only a
couple thousand) has been used for webserver farms ... where somebody
gets their own dedicated webserver virtual machine ...
isolated/partitioned from other entities. it works for webservers that
have relatively light to medium loading.

Book on computer architecture for beginners

John R. Levine wrote:
I like Blaauw and Brooks, "Computer Architecture", Addison-Wesley, 1997.

It's another 1200 page brick, but it's a nice complement to Hennessy
and Paterson, more descriptive and historical. Half the book is a
taxonomy of interesting historical computer designs, giving a unique
view of how we got to where we are. The authors are old IBMers but
the coverage is not unduly slanted toward IBM. Brooks is the same
Brooks who wrote the software classic "The Mythical Man-Month."

... from above
One of the first jobs for the staff of the new Center was to
put together IBM's proposal to Project MAC. In the process,
they brought in many of IBM's finest engineers to work with
them to specify a machine that would meet Project MAC's
requirements, including address translation. They were
delighted to discover that one of the lead S/360 designers,
Gerry Blaauw, had already done a preliminary design for
address translation on System/360.(15) Address translation
had not been incorporated into the basic System/360 design,
however, because it was considered to add too much risk to
what was already a very risky undertaking.

..... and
(15) G.A. Blaauw, Relocation Feature Functional
Specification, June 12, 1964. "Nat Rochester (one of
the designers of the 701) told us, 'Only one person in
the company understands how to do address translation,
and that's Gerry Blaauw. He has the design on a sheet
of paper in his desk drawer.'" (R.J. Brennan, private
communication, 1989.)

before doing the release 4 support for 158 & 168 (two-processor) smp,
there were two other projects (that were never announced): VAMPS (a
5-way smp ... implemented with a lower-level 370 processor that didn't
have caches ... and so there wasn't a cache problem) and logical
machines (a 16-way smp using 158 engines that didn't implement
standard cache consistency ... only compare&swap and a couple other
operations).

the strong cache consistency of 370 ... gave problems when they tried
to tie a pair of two-processor 3081s into a 4-way 3084. the 370 flavor
had the cache running at the same speed as the machine cycle. later
machines going to larger numbers of processors started running the
cache at much faster cycle speeds than the rest of the infrastructure
(so the cross-cache invalidation slow-down didn't slow down the
processor cycle speed).

aka swinging the pendulum from the extreme future system hardware
complexity to the extreme hardware simplicity of 801/risc ... where
there were even statements about trading off increased software
complexity in 801/risc for simpler hardware.
http://www.garlic.com/~lynn/subtopic.html#801

the strong cache consistency problems in 370 influenced "harvard"
architecture and separate I & D caches w/o any provision for cache
consistency (even between I & D caches on the same chip). oak
... which was a 4-way (6000) processor complex with shared memory,
didn't provide for cache consistency. there were two modes ... a
virtual "segment" was either defined as "cached" and not consistent,
or a virtual "segment" could be tagged as consistent and never
"cached".

jmfbahciv@aol.com wrote:
Query: Did you begin to think of the CPU as a device and your
equivalent to our CPNSER as a CPU driver? I always found that
people who didn't think of CPNSER as a driver had enormous troubles
with the concept of user thruput. I never had enough data to
conclude this p.o.v. was key to misunderstandings about what
a SMP is supposed to be.

it was canceled w/o being announced and predated the work on turning
out vanilla smp support in standard vm370 on standard 370 smp. it is
also where i came up with the idea of a sort of global kernel lock
... w/o the spin-lock characteristics ... not having to rework the
whole kernel for fine-grain locking ... just certain critical paths
... leaving the rest behind a global kernel lock.

however, in the VAMPS case ... much of the code that in the later
vanilla 370 implementation was reworked for fine-grain smp locking
... was actually moved into the microcode of the hardware
architecture. most of dispatching, initial interrupt handling and some
amount of additional privilege instruction simulation (in software in
normal vm370) was moved into the microcode of the machine. in some
sense abstracting the smp dispatching into the hardware of the machine
... was akin to what i432 tried to do later.

in VAMPS, if a processor couldn't continue with the virtual machine
... it would attempt to interrupt into the kernel ... if the kernel
was already busy on another processor ... it would queue an "enter
kernel" interrupt and go off to look for another virtual machine to
dispatch. standard vm370 at the time was nominally spending 50-60
percent of the time in virtual machine execution and 40-50 percent in
the hypervisor kernel. for VAMPS to be successful ... the amount of
time spent in the serialized hypervisor kernel code had to be reduced
to less than 25 percent (so four processors' worth of virtual machine
execution didn't saturate 100 percent of a single serialized kernel
processor)

besides abstracting a lot of the transition between hypervisor and
virtual machine execution into the microcode of the machine (and
parallelizing it) ... a lot of disk i/o process handling was
abstracted and offloaded out into the microcode of the disk
controller. some of this was similar to what was later done in 370/xa
for the queued i/o interface ... but for disk it could also recognize
reordering and combining queued operations to optimize disk arm/head
operation. this also helped reduce the pathlength left in the
serialized kernel.

when VAMPS was killed, the design was retargeted to a purely software
implementation. the objective was to have a basically serialized
kernel design with a global kernel lock ... the highest used part of
the kernel software was parallelized outside of the global kernel lock
... allowing for 1) the maximum amount of parallelization for the
minimum amount of smp code changes and 2) rather than a spin lock ...
sufficient parallelization so that interrupts into the kernel could
queue a request when the kernel lock was already held ... rather than
having a global kernel spin lock implementation.

VAMPS was aggressive about abstracting away how many real processors
actually existed ... the basic serialized kernel just placed tasks on
a queue and took stuff off another queue. whatever processors there
happened to be ... would pull work off the task queue (placed there by
the hypervisor) and run until they absolutely needed serialized kernel
service ... and then place work on the kernel queue (and potentially
go off to see if there was other work to do).

Determining processor status without IPIs

glen herrmannsfeldt wrote:
As I understand it, IBM's VM and OS/VS1 have a way for the guest OS
to regain control while one task is paging to dispatch another task.
I don't know if others supply this ability.

page fault handshaking ... where vm would try to reflect to the vs1
operating system a page fault that vm was handling for the vs1 virtual
machine ... under the assumption that the fault might be for a
specific task (in the vs1 multi-tasking environment) ... allowing vs1
a chance to switch tasks.

it had been done along with ecps for virgil/tully ... an endicott
effort for the 138/148 ... attempting to make the environment nearly
"vm only" ... i.e. customers would never run the machines in a non-vm
environment ... something akin to the current proliferation of LPAR
production environments in customer shops. misc. past posts about
microcode related enhancements
http://www.garlic.com/~lynn/subtopic.html#mcode

from above:
The process of making guest systems perform better began as
soon as the customers got their hands on CP. Lynn Wheeler
had done a lot of work on this while he was a student at
Washington State, but he was by no means the only one who
had worked on it. The CP-67 Project had frequently
scheduled sessions in which customers reported on
modifications to CP and guest systems to make the guests run
better under CP. These customers had measured and monitored
their systems to find high overhead areas and had then
experimented with ways of reducing the overhead.(108)
Dozens of people contributed to this effort, but I have time
to mention only a few.

Dewayne Hendricks(109) reported at SHARE XLII, in March,
1974, that he had successfully implemented MVT-CP
handshaking for page faulting, so that when MVT running
under VM took a page fault, CP would allow MVT to dispatch
another task while CP brought in the page. At the following
SHARE, Dewayne did a presentation on further modifications,
including support for SIOF and a memory-mapped job queue.
With these changes, his system would allow multi-tasking
guests actually to multi-task when running in a virtual
machine. Significantly, his modifications were available on
the Waterloo Tape.

Performance and Capacity Planning

jmfbahciv@aol.com wrote:
Would you give an example or two of your critical paths?
It just occurred to me that these might be different because of
OS philosophy differences. I'd always assumed that these would
be the same, no matter what was running on the hardware.

when i was an undergraduate ... i did a lot of path rewrites of stuff
that i thought would likely be high-use ... as well as doing
"fastpath" ... special-casing a path thru the code for the most common
case. some of that reduced pathlength by a factor of 100 ... and for
some general benchmarks ... an overall reduction of 80-90 percent. old
reference to a presentation i gave at the Atlantic City share meeting,
aug. of 1968:
http://www.garlic.com/~lynn/94.html#18 CP/67 & OS MFT14
http://www.garlic.com/~lynn/94.html#20 CP/67 & OS MFT14

basically we were told that there would be 6000 bytes of microcode for
ecps (vm microcode performance assist) ... that kernel-type cp code
would drop approximately byte-for-byte from 370 code into machine
microcode ... and that machine microcode would run about ten times
faster than 370 code (i.e. the native machine microcode engine had
about a 10:1 instruction ratio emulating 370 ... so code dropped
directly into microcode picked up that factor).

in the VAMPS scenario, it was slightly more structured than in the
ecps scenario ... where everything was a candidate. in VAMPS there
were some logical construction limitations ... i.e. where in the
processing path the code actually was.

the top two pathlengths in the ecps study (referenced above), i had
extensively optimized repeatedly over the years ... and they still
accounted for 15 percent or so of kernel time ... and they were also
in areas that could be offloaded for VAMPS.

the top item is selecting a virtual machine to run ... loading up all
the information for running that virtual machine ... and then
dispatching the virtual machine. with the queued interface for VAMPS
... the kernel just put tasks on queue for dispatching. the processor
microcode selected something on the dispatch queue, locked the queue
entry, loaded up the information to run ... and ran it.

the 2nd entry (in the ecps study) was entry to the kernel, typically
because of 1) page fault or 2) privilege instruction interrupt ...
requiring the kernel to simulate on behalf of the virtual machine. in
both VAMPS and ecps ... some amount of the microcode involved in
execution of privilege instructions ... was enhanced to recognize
virtual machine mode ... and as a result would directly execute a
"privilege" instruction using virtual machine rules ... bypassing
having to enter the hypervisor kernel for simulation of the privilege
instruction. for other things that actually required entry into the
hypervisor kernel, the VAMPS code would directly handle a lot of the
status storing away, attempting to obtain the kernel lock ... and
actually entering the kernel. if the VAMPS microcode was blocked from
entering the kernel ... it would queue a super light-weight task
against a kernel queue ... and go off to the (microcode) dispatcher to
run another task.

from the kernel standpoint ... it saw putting things on a queue
... and re-entry as part of pulling something off the queue. it never
actually saw low-level interrupts ... typical of real 360/370
... and/or spinning for a kernel lock; a processor either entered into
kernel mode (because no other processor was currently in kernel mode)
... or it queued a request for kernel mode and went off to see if
there was other (non-kernel) work to do.

Performance and Capacity Planning

jmfbahciv@aol.com wrote:
IIRC, the only reason JMF invented his spin lock is because
KL caches were not write-thru and the other CPU had to wait
for the data to get to memory. That's why our SMP has a
cache sweep serial number.

I may very well be confused here because all of this is based
on my memory of conversations.

360 had "test&set" instruction for locking convention ... basically
test a byte for zero ... set it to one if zero ... and indicate
instruction condition code ... it was defined as a serialized atomic
operation ... across multiprocessors and caches.

charlie was doing a lot of work on cp67 multiprocessing support and
fine-grain locking and came up with a new instruction ... it was given
the name compare and swap ... because CAS are charlie's initials
http://www.garlic.com/~lynn/subtopic.html#smp

attempting to justify it for 370 ... the 370 architecture owners in
pok said that it wouldn't be possible to justify a new instruction for
370 based solely on multiprocessor use (the view around pok was that
test&set was sufficient for multiprocessor operation). as a result,
the atomic operations for a multithreaded environment (either
multiprocessor or single processor) were invented. several examples
were devised where multi-threaded applications could perform various
kinds of operations w/o having to resort to kernel calls to serialize
(even in a single processor environment).

compare&swap was defined as being atomic and serializing ... across
any number of processors and any kind of cache structure.
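
a small c11 sketch of the two styles (illustrative only ... modern
atomics standing in for the 360/370 instructions):

#include <stdatomic.h>
#include <stdbool.h>
#include <stddef.h>

/* test&set style: atomically set a flag, condition on the old value
 * -- a spin lock is about all it gives you */
static atomic_flag lock = ATOMIC_FLAG_INIT;

void ts_acquire(void) {
    while (atomic_flag_test_and_set(&lock))
        ;   /* spin until the old value was zero */
}
void ts_release(void) { atomic_flag_clear(&lock); }

/* compare&swap style: a multithreaded application can update a
 * shared structure (here, push on a list head) without any kernel
 * call to serialize -- the use case invented to justify the
 * instruction */
typedef struct node { struct node *next; } node;

void cs_push(_Atomic(node *) *head, node *n) {
    node *old = atomic_load(head);
    do {
        n->next = old;
    } while (!atomic_compare_exchange_weak(head, &old, n));
}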

Performance and Capacity Planning

jmfbahciv@aol.com wrote:
Sheesh. They must have been spending all their time knocking each
other up. My gut feel says that 25% is too much.

the issue in cp67 was that all time was accounted for ... while in
virtual machine mode ... it was all charged to the "problem state" of
the virtual machine. there were lots of reasons to enter the kernel
and "supervisor state" ... lots of things could be performed in the
kernel on behalf of the virtual machine in "supervisor state" ...
which would also be charged against the virtual machine ... as doing
things on behalf of the virtual machine.

When i originally started on cp67 there was some non-linear code
(linear scanning of certain kinds of lists) that grew proportional to
the number of tasks and virtual machines. at 35 virtual machines it
was hitting 15-20 percent of total cpu (all kernel supervisor) and not
charged to a specific virtual machine ... aka there were two kinds of
kernel supervisor state ... that having to do with general system
bookkeeping, not charged to a specific user (and having to be
amortized across all users as "overhead"), and kernel supervisor state
charged directly to the virtual machine associated with kernel
activity done directly on behalf of that virtual machine.

when i restructured various paths in the system ... i did away with
nearly every linear scan ... reducing cp67 "overhead" to possibly half
a percent of elapsed time ... even with 75-80 users.

the issue in global kernel spinlock ... was that the standard state of
the art for the period ... was that on entry to the kernel (for
whatever reason) the kernel interrupt code would spin on the global
kernel spinlock ... until that processor obtained the spinlock and
could proceed. only one processor could be executing in the kernel at
any point in time.

the logic redo for VAMPS ... later ported to a purely software
implementation ... moved the kernel lock well past the basic interrupt
routines ... so that a much smaller portion of the kernel was
serialized, being only able to execute on a single processor at a
time. the other VAMPS change ... later morphed into the purely
software implementation ... was that the global kernel serialization
lock wasn't a spinlock ... i initially referred to it as a bounce lock
http://www.garlic.com/~lynn/submain.html#bounce

a processor when it needed certain kinds of kernel function would
attempt to obtain the kernel serialization lock ... if it obtained it
... it would proceed as normal. if it failed to obtain the kernel lock
... it would queue a super light-weight thread against the kernel lock
and go off and look for something else to do.

so access to certain serialized kernel functions could only be
performed on one processor at a time (although on a moment-to-moment
basis ... it could be any processor in the complex acting as the
kernel server). since all requests for those kernel services were
serialized on a single processor at a time ... that means the total
service time available for performing those services is 100 percent of
a single processor (couldn't have more than 100 cpu seconds aggregate
of serialized kernel time per 100 seconds of real time)

So you have a somewhat standard operations research analysis. say you
have a four processor system ... where for every 100 seconds of
general execution ... you needed 25 seconds of global serialized
service. with four processors ... there would be 400 seconds of
execution in 100 seconds of real time, generating 4*25=100 seconds
worth of serialized workload in 100 seconds of real time ... exactly
saturating a single serialized kernel. in effect it would take a five
processor system (four processors' worth of general execution plus the
serialized kernel consuming the fifth) for no processor to have to
wait on serialized kernel processing. the VAMPS microcode changes
actually reduced it to less than 25 seconds of serialized kernel
processing per 100 seconds of non-kernel processing ... but the
requirement was that it had to be reduced to at most 25 seconds of
kernel processing to keep the infrastructure from waiting on kernel
services.
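
the saturation arithmetic as a trivial c check (numbers straight from
the example above, not measurements):

#include <stdio.h>

int main(void) {
    double nproc = 4.0;    /* processors doing general execution */
    double fkern = 0.25;   /* serialized kernel seconds generated per
                              second of general execution */
    double real  = 100.0;  /* seconds of real time */

    double demand = nproc * fkern * real;   /* 4 * 25 = 100 seconds */
    printf("serialized kernel demand: %.0f s per %.0f s real time\n",
           demand, real);
    /* one serialized kernel can supply at most 100 s of service per
     * 100 s of real time ... four processors at a 25% ratio is
     * exactly the saturation point; above that, processors queue */
    printf("saturated: %s\n", demand >= real ? "yes" : "no");
    return 0;
}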

now in the standard global kernel spin-lock implementations of the
period ... you didn't actually see processors in wait state for kernel
services ... they were all spinning on the kernel spin-lock ... if
there was greater demand for serialized kernel services than 100
percent of a single processor.

in the VAMPS ... and later software implementation case with the
bounce/queued implementation ... a processor that couldn't obtain the
global kernel spin-lock, instead of spinning would queue a super
light-weight request (for kernel services) ... and go off and look for
other, non-kernel work to do. If it couldn't find other non-kernel
work to do, it would enter wait-state ... but it wouldn't be in a
tight compute-bound spin-loop.

the super light-weight queueing mechanism was such that on cache
machines ... any overhead of actually doing the queue/dequeue
operations was more than offset by maintaining kernel instruction
cache locality on the same processor (it actually ran faster).

in VAMPS, the global kernel lock metaphor ... just precluded more than
one processor at a time executing in the kernel. in the morph to the
pure software implementation ... some amount of the kernel was
parallelized, maybe 1500-2000 instructions worth; the rest was left
behind a serialized kernel lock. however, the 1500-2000 instructions
that were parallelized were the highest-use instructions and also
implemented the queue/dequeue operations ... allowing processors that
were blocked from entering the kernel to go off and attempt to do
non-kernel work (rather than spinning on a lock).
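
a c11 sketch of the bounce-lock idea (hypothetical names ... a toy
rendering of try-the-lock-else-queue, not the actual vm370 code):

#include <stdatomic.h>
#include <stdbool.h>
#include <stddef.h>

/* a super light-weight kernel request */
typedef struct kreq {
    struct kreq *next;
    void (*service)(void *);
    void *arg;
} kreq;

static atomic_flag kernel_lock = ATOMIC_FLAG_INIT;
static _Atomic(kreq *) kernel_queue = NULL;

static void enqueue(kreq *r) {
    kreq *old = atomic_load(&kernel_queue);
    do { r->next = old; }
    while (!atomic_compare_exchange_weak(&kernel_queue, &old, r));
}

/* returns true if this processor served as the kernel; false means
 * the request was queued and the caller should go look for other,
 * non-kernel work instead of spinning */
bool kernel_enter(kreq *r) {
    enqueue(r);
    for (;;) {
        if (atomic_flag_test_and_set(&kernel_lock))
            return false;           /* another processor is serving */
        /* became the serialized kernel server: drain the queue */
        kreq *q;
        while ((q = atomic_exchange(&kernel_queue, (kreq *)NULL)) != NULL)
            for (; q != NULL; q = q->next)
                q->service(q->arg);
        atomic_flag_clear(&kernel_lock);
        /* close the window: a request may have arrived between the
         * last drain and the release; if so, try to serve it too */
        if (atomic_load(&kernel_queue) == NULL)
            return true;
    }
}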

standard 360/65 smp just had shared memory ... but not shared i/o. the
360/65 attempted to simulate shared i/o with device controllers that
were attached to multiple channels (aka the 360 i/o buses); the
hardware was frequently configured so that the same channel address on
both processors connected to the same control unit at the same
address.

the 360/67 smp was different ... it had a channel controller box that
allowed all channels to be connected-to & addressed-by all processors.

370 reverted to the 360/65 smp model ... requiring shared i/o
simulation by having shared controller connection to processor unique
channels. you didn't see the return to common channel addressing until
370/xa on the 3081 (the 360/67 also had 32-bit virtual addressing
mode, 370 only had 24-bit virtual addressing mode ... it also wasn't
until you got to 370/xa on the 3081 that 31-bit virtual addressing was
introduced).

370/158 and 370/168 offered a cost-reduced, computational-intensive
two-processor option ... where only one processor had connected
channels for i/o. in that scenario ... for any application running on
the processor w/o i/o connectivity that requested kernel i/o services
... the request had to be handed off to the processor with i/o
connectivity. this configuration also tended to have a somewhat
unanticipated side-effect: the cache-hit ratio on the processor w/o
i/o connectivity tended to go up ... and it therefore got more work
done. the cache-hit ratio on the processor with i/o connectivity could
also slightly improve ... because in the two-processor case ... the
kernel i/o code would only be executing on one processor ...
increasing the probability of local cache hits for that part of the
kernel.

Determining processor status without IPIs

Eric P. wrote:
So it seems that in the True VM model the host must never
punt page fault exceptions to the guest. This would limit the
ability to test guest OS's. The host does punt timer interrupts
to the guest, which the guest uses to trigger scheduling of
applications running on it.

so in the cp67/mvt and the vm370/vs1 scenarios, the hypervisor had
some additional information about the virtual guest. the program
status word (PSW) had virtual supervisor/problem state and some other
indicators. when the virtual PSW was in (virtual) supervisor state
... you were pretty sure that the operating system was in the kernel
... and so reflecting a pseudo page fault wouldn't do much good. it
was only when the virtual PSW was in (virtual) problem state that you
were reasonably sure the guest operating system was running in
application space. there were also hints about whether or not the
virtual PSW was enabled for (virtual) i/o interrupts, etc.

in the cp67/mvt case ... the mvt guest believed it was running "real"
w/o virtual address space support. the mvt guest got reflected a
pseudo page fault interrupt from the hypervisor ... which it treated
like a need for mvt to suspend the current application and see if it
could task switch. this whole processing increased the overhead of
providing virtual machine simulation ... but it allowed a guest
operating system to get higher thruput than it would have if it was
blocked from execution while the hypervisor handled a page fault on
behalf of the virtual machine. note that virtual page fault
handshaking was an optional feature that could be turned on/off for
specific virtual machines.

it was a little more complicated in the vm370/vs1 case ... the vs1
guest could be running a virtual address space ... i.e. there would
only be an issue with regard to vm370 having a page fault for 2nd
level virtual addresses ... which it could reflect to the virtual
machine thru page fault handshaking.

the issue of running a page replacement algorithm under a page
replacement algorithm is a configuration and load issue ... and can
result in some pathological performance issues that many might believe
to be inexplicable (w/o understanding some of the underlying
assumptions behind page replacement algorithms).

this was significantly mitigated in the vm370/vs1 scenario. vs1 was a
minimal translation of an earlier batch system to a virtual address
space environment. basically vs1 created a single virtual address
space ... and then for the most part pretended it was running on a
real machine with real storage the size of the virtual address space
(a typical configuration was a 4mbyte virtual address space running on
a real 370 with 512k bytes). in the page handshaking scenario ...
vm370 might provide a 4mbyte virtual machine address space ... and vs1
would define a single virtual address space where the size exactly
matched the virtual machine storage size. in this scenario, while vs1
was running with a single virtual address space ... it wasn't doing
any paging (or page replacement) ... since the single virtual address
space size and the virtual machine address size were the same.

this had several advantages ... besides avoiding the page replacement
under page replacement scenario. vs1 used the 2k page option that was
originally selected for the smaller real storage 370 sizes originally
introduced ... i.e. it compacted better on really small real storage.
vm370 used 4k page sizes ... which didn't compact as well on really
small real storage ... but could cut the number of page faults in half
(since twice as much was being transferred at a time) ... and
compaction was less of an issue as larger and larger real storage
sizes became available on 370.

a "large" 370/145 had 512k storage. vm370/vs1 handshaking was
introduced in the same time as ecps support for virgil/tully (138/148
follow-on to 135/145). typical 370/148 had 1mbyte of real storage
(twice a large 370/145).
http://www.garlic.com/~lynn/subtopic.html#mcode

the other issue was that i had exceedingly optimized the end-to-end
pathlength for turning a page (the pathlength efficiency and accuracy
of page replacement), as well as the whole i/o pathlength, task switch
overhead, etc.
http://www.garlic.com/~lynn/subtopic.html#wsclock

so in addition to possibly cutting the number of page faults in half
when vs1 let vm370 do its paging (using 4k pages instead of vs1 native
2k page sizes) ... my total pathlength for handling a page fault
(whether 2k or 4k size) was possibly 1/5th to 1/10th the total
pathlength that it took a vs1 kernel to handle a page fault. this
1/5th to 1/10th value is for straight 370 instruction comparison ...
and doesn't include the additional kernel ecps microcode performance
assist done for vm370 on virgil/tully machines. if doing 1/2 the page
faults, each one at 1/10th the pathlength ... that yields about 1/20th
the overall pathlength. specifically on virgil/tully, the ecps
microcode assist might represent an additional improvement of 2-4
times ... say 1/40th to 1/80th.
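
the combined-factor arithmetic as a trivial c check (the individual
factors are the estimates from the text, not measurements):

#include <stdio.h>

int main(void) {
    double fault_ratio = 0.5;  /* 4k pages: half the page faults */
    double path_ratio  = 0.1;  /* 1/10th pathlength per fault */
    double ecps_gain   = 4.0;  /* up to 4x more from ecps microcode */

    printf("370-only: 1/%.0f of the vs1 pathlength\n",
           1.0 / (fault_ratio * path_ratio));             /* 1/20 */
    printf("with ecps: up to 1/%.0f\n",
           1.0 / (fault_ratio * path_ratio / ecps_gain)); /* 1/80 */
    return 0;
}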

R.S. wrote:
Obviously no. Everybody talks about it but almost nobody does it. I
mean i.e. tape encryption. Regardless of tapes, SSL's, encrypted
networks, VPN's, somewhere at the end the data appears in unencrypted
form. Otherwise it is useless. And this end could be the weakest
link! A human sits there and reads the data, maybe copies it, maybe
the copy is illegal...

recent threads in sci.crypt & comp.arch have been on whether all
aspects of security have to be applied in equal strength. however,
this discussion has an example where having strong authentication
eliminates the possibility of using information obtained thru
eavesdropping for fraudulent purposes. if you can't use such harvested
information for fraudulent purposes ... then it significantly
mitigates the requirement for encrypting the information ...

one way of classifying various components of security is PAIN:
P ... privacy (or sometimes CAIN, with C for confidentiality/encryption)
A ... authentication
I ... integrity
N ... non-repudiation

in the referenced example ... sufficiently strong authentication and
business rules to eliminate the usefulness of harvesting information
for fraudulent purposes can significantly mitigate the requirement for
having to hide the information (thru encryption).

... for some drift ... with respect to the N in pain ... the rsa
conference earlier this year had a track on logging and journalling ...
also helpful in catching insiders who might be doing bad things.

for additional drift ... there have been some number of threads
w/discussions about end-points almost always being more profitable
targets (for the crooks) than the wires between the end-points.

Secure Banking

"wayne.taylor2@gmail.com" writes:
I am a final year computing student, doing a project on secure banking,
as part of my project there is criteria for the simulation of
transactions, as well as designing an industry data model or likeness
that banks use.

there is the ISO 8583 financial industry standard for payment network
transactions ... check the ISO international standards web site
http://www.iso.org

working in the ansi x9a10 financial standard working group, we were
charged with preserving the integrity of the financial infrastructure
for all retail payments ... and came up with the x9.59 financial
standard
http://www.garlic.com/~lynn/x959.html#x959

glen herrmannsfeldt writes:
IBM would sell it to you until a few years ago.

Now the oldest you can get is the 370 Principles of
Operations. Maybe about three times as thick, and
very reasonably priced.

you can sort of tell when they switched to script. PoP has a lot of
boxes for syntax and diagrams. Early versions were typeset.

then they moved the "red book" architecture manual (distributed in a
3-ring red binder) to cms script. Depending on the flag used when you
invoked script ... it either printed the full architecture manual
(lots of sections discussing justification for instructions, various
trade-offs considered, engineering notes, etc) ... or just the PoP
(principles of operation) subset. During this early period ... they
were printed on the 1403 ... and I guess pubs were generated using
photo-offset from the 1403 output.

You could get fairly high quality 1403 output with a really good
ribbon. However the 1403 had small gaps printing the vertical lines
(you could come pretty close to solid vertical lines if you reset the
1403 to 8 lines/in instead of 6 lines/in ... but then you got somewhat
smashed characters and 88 lines/page instead of 66 lines/page). solid
vertical lines came back when they started using the 3800 laser
printer (instead of the 1403).

and then "G", "M", and "L", invented GML in 69 ... and GML format
processing was added to cms script (and generalized markup language
was contrived to match their initials). later GML was standardized in
ISO as SGML ... which has since begat html, xml, fsml, saml, bpml, et
al.
http://www.garlic.com/~lynn/submain.html#sgml

Del Cecchi writes:
Now Lynn, you could get perfect lines on a 1403. We printed ALDs and
WPRINTS on 1403 all the time. I do believe a special chain was used.

del cecchi

PS an ALD was an "automated Logic Diagram" or schematic printed from
the netlist (BDL/S).

A Wprint was a print of the wiring on a chip or card printed as a page
or group of pages per level.

I have some vague recollection of a special train that printed
sideways ... so the top of the page was at the side ... and multiple
pages taped together ... not across the perforations ... but across
the sides (after the tractor holes were removed). Also, the 1403 would
be switched from the normal 6 lines per inch to 8 lines per inch.

the first couple pages, prefaces, etc. have been typeset ... then the
contents (on the 6th page) changes to 1403 font ... and the first box
diagram on page 9 of the introduction (11th page) has broken vertical
lines (as an aside ... the two 370 PoPs at the above site are typeset
and not 1403).

note also ... that the original script formatting controls were
runoff-like "dot commands" (before GML support was added). frequently
used were

.rc on
.rc off

and/or

.rc 1 on
.rc 1 off

which indicated revisions. revision codes are indicated by the
side-bar next to the text. you can see this (also) on page 9 of the
introduction (11th page) at the top left ... in the paragraph that
starts "VM/370 is designed for ...". again the vertical bars are
broken.

The revision is for virgil/tully ... the 370 138/148. The following
page (12th page) shows a diagram about a 138 configuration (again note
the revision bars on the side).

On the following page (13th page), there is an inserted drawing that
has continuous lines ... and also a different font. But if you move on
to the next page (14th page, pg. 12 of the introduction), you have
more diagrams with broken vertical lines and also broken-line revision
bars mentioning VMCF.

Anders Rundgren wrote:
If you have a scheme that with a limited amount of money and user
inconvenience allows a citizen to access potentially thousands of
e-gov sites, without using TTPs, I (and all e-govs in the World)
would like to hear about it.

Replacing the _indeed_ stale cert info with a stale signed account
claim would not have any major impact on this scenario except for a
few saved CPU cycles.

SSL is by no means perfect but frankly: Nobody has come up with a
scalable solution that can replace it. To use no-name certs is not so
great as it gives user hassles

The issue isn't about TTPs specifically ... it is about PKIs and
certificates, which have a design point of addressing specific issues
involving offline environments; aka the offline email environment of
the early 80s where somebody dialed up their local (electronic) post
office, exchanged email and then hung up. They then possibly had some
first-time email from somebody that they had never communicated with
before ... and needed some method of finding out some information
about the sender. This is somewhat analogous to the "letters of
credit" from the sailing ship days.

The issue isn't about TTPs ... it is about trying to apply a solution
designed to compensate for being in an offline environment with no
recourse to the real information (including direct access to the TTPs
with the real information) to the emerging online environment.

The contention is that in a real online environment, resorting to
stale, static information (designed to compensate for lack of access
to real online information in an offline environment) is fundamentally
flawed given timely access to the current real information.

Somewhat as the online environment has become more and more pervasive
... the stale, static, offline PKI paradigm has attempted to find
market niches in the low/no value areas where the relying party can't
justify the possible incremental cost of having access to real, timely
information. For instance a PKI certificate issued at some point in
the past year ... might claim that an individual has a specific bank
account. All other things being equal, would a relying party (about to
execute a high value transaction) prefer to have

1) stale, static information possibly a year old regarding the other
party having a specific bank account

or

2) timely, real-time response from the other party's financial
institution that the other party not only still has an active account
... but also that the account has sufficient funds to cover the
indicated transaction, and furthermore that the other party's
financial institution will stand behind the transfer of those funds.

One of the passing issues for PKI infrastructures moving into the
low/no value market segments ... is that they are less & less likely
to be able to charge any significant amounts for stale, static
certificates in support of no/low value operations.

The other issue is that the typical TTP PKI business model is contrary
to most standard business practices. In most standard business
practices, the relying party contracts directly with a TTP for timely
information ... creating some legal obligation on the part of the TTP
to perform in a specific manner. In the TTP PKI certificate-push
model, the key owner is contracting with the TTP agency (buying a
certificate), which creates various kinds of legal obligation between
the TTP agency and the key owner. The key owner then pushes that
certificate to a relying party ... where there has been no legally
established relationship between the relying party and the TTP agency.

another issue is that in the early 90s, you somewhat found TTPs
considering grossly overloading X.509 identity certificates with
enormous amounts of privacy information. the problem was that many of
the TTPs had no idea what purposes the X.509 identity certificates
might be put to and/or with which relying parties (and what
information requirements unknown and unpredictable relying parties
might have).

then as you moved into the mid-90s, some institutions were starting to
come to the realization that x.509 identity certificates grossly
overloaded with personal information represented significant privacy
and liability issues. the result was somewhat of a retrenchment to
relying-party-only certificates
http://www.garlic.com/~lynn/subpubkey.html#rpo

however, where a relying party registers a key-owner's public key in
some sort of database ... and then issues a stale, static,
relying-party-only certificate effectively only containing some sort
of account number (or other form of database lookup value) bound to
the public key ... the key-owner was then to append such certificates
to all communication with the relying party. however, it is trivial to
demonstrate that appending such stale, static, relying-party-only
certificates to communication with the relying party is redundant and
superfluous, i.e. the relying party is already in possession of a
superset of the information in the stale, static, relying-party-only
certificate.

the issue is that there were concerns about the integrity of the
domain name "TTP" providing real-time responses for real-time domain
name requests thru-out the world. the browsers would validate the
certificate and then check that the domain name the user typed in
matched the domain name in the certificate.

the problem for the CA SSL PKIs is that they had to validate the
information for a requested ssl domain name certificate with the
actual authoritative agency (aka TTP) for domain names ... the domain
name infrastructure. The CA SSL PKIs had to get various kinds of
identification information from the SSL domain name certificate
applicant and then perform the time-consuming, expensive and
error-prone task of attempting to match it with the identification
information on file with the domain name infrastructure as to the
owner of the specific domain name.

This then also left the CA SSL PKIs vulnerable to possible integrity
problems with the domain name infrastructure. Somewhat from the CA SSL
PKI industry is a proposal to improve the integrity of the domain name
infrastructure by having domain name owners register public keys for
their domains. Future communication is then digitally signed and can
be verified with the on-file public key ... aka a certificate-less
public key operation
http://www.garlic.com/~lynn/subpubkey.html#certless

The other advantage for the CA SSL PKIs is that they can also require
that SSL domain name certificate requests be digitally signed. Then
they can use the onfile (certificate-less) public key to change from
the time-consuming, expensive and error-prone identification process
to a much simpler, less expensive and more reliable authentication
process (by retrieving the onfile public key for validation of the
digital signature on the SSL domain name certificate request).

This however represents something of a catch-22 for the CA SSL PKI
industry. If they are able to retrieve trusted onfile public keys from
the domain name infrastructure for validating digital signatures (as
the root of the trust chain for SSL domain name certificates) ... then
it would be technically possible for everybody in the world to also
retrieve trusted onfile public keys in their real-time, online domain
name resolution requests. If everybody in the world could do real-time
retrieval of onfile public keys from the domain name TTP ... for
verifying digital signatures in communication with the servers that
they are contacting ... then it obsoletes the requirement for having
SSL domain name certificates.