How long before Microsoft goes the way of DEC (and in part, IBM)?

Michael Wojcik <mwojcik@newsguy.com> writes:
A joint venture from IBM, SCO, Sequent, and others to create a single
flavor of Unix for a variety of platforms. The impression in the late
1990s was that Windows was doing well on smaller machines partly
because the Unix market was too fragmented.

from above:
The founding of the organization was largely seen as a response to the
collaboration between AT&T and Sun Microsystems on UNIX System V Release
4, and a fear that other vendors would be locked out of the
standardization process. This led Scott McNealy of Sun to quip that
"OSF" really stood for "Oppose Sun Forever." The competition between the
opposing versions of UNIX systems became known as the UNIX wars.

from above:
By the early 1990s, the major Unix players had begun to realize that the
standards rivalries known as the Unix wars were causing all participants
more harm than good, leaving Unix open to emerging competition from
Microsoft. The COSE initiative in 1993 can be considered to be the first
unification step and the merger of the Open Software Foundation (OSF)
and X/Open in 1996 as the ultimate step in the end of those skirmishes.

from above:
It was formed when X/Open merged with the Open Software Foundation in
1996. The Open Group is most famous as the certifying body for the UNIX
trademark; in the past the group was best known for its publication of
the Single UNIX Specification, which extends the POSIX standards
and is the official definition of UNIX.

sam@PSCSI.NET (Sam Siegel) writes:
Every state has laws regarding the retention of data related to the conduct
of business. The amount of time is typically 3 to 7 years. Not keeping the
receipts (or copies thereof) could create legal problems as well.

... in
the credit card slip case ... both the consumer and the merchant have
copies.

the electronic record of the transaction data is kept (by the issuing
bank) ... the question of what wasn't kept was the merchant's paper slip
copy with signature &/or an electronic image of same.

the issue was resolving (potentially legal) disputes ... which side has the
burden of proof and what kind of proof. the merchant not having the signed
slip effectively resolves it on behalf of the consumer (having the signed
slip doesn't mean that it resolves on behalf of the merchant ... the
merchant still has to show that it is the consumer's signature).

other items are things like how long the consumer has to dispute items.

in any case, standard "reg. E" places the burden of proof on the merchant

one of the interesting flyers in the 90s was a proposal about digitally
signed, public key transactions for internet transactions. consumers
would pay $100/annum for their digital certificate ... and in an effort to
sweeten the deal for merchants to install the technology ... the burden
of proof (in disputes) for public key transactions ... would be switched
from the merchant to the consumer. the question was raised ... why would
the consumer pay $100/annum for something that would switch the burden of
proof to them.

there has been some amount of churn in the UK with their chip payment
card about something analogous ... where the dispute burden of proof is
now effectively on the consumer.

Korean bank Moves back to Mainframes (...no, not back)

lynn@GARLIC.COM (Anne & Lynn Wheeler) writes:
however, by at least the early 90s, there were cases of compromised
end-points recording valid information (done during the process of valid
transactions). these operations tended to be more large scale wholesale
operations ... getting information for tens of thousands (or millions)
... rather than a few tens.

frequently these are external attachments specifically targeting
magstripe ... however, there have been lots of cases where collecting
technology has been installed inside the end-point (pos terminal or atm
cash machine). cases have included modification of machines already
installed, replacing a machine with a modified machine, installing the
modification at time of manufacture ... or even a criminal front
organization manufacturing machines and selling them on the open market (or
on the gray market ... a copy of some other vendor's machine).

criminal front manufacturers have even sold such machines "at cost"
(undercutting competition) because they are planning on making up the
profit with fraudulent transactions.

lynn@GARLIC.COM (Anne & Lynn Wheeler) writes:
there has been some amount of churn in the UK with their chip payment
card about something analogous ... where the dispute burden of proof is
now effectively on the consumer.

there was a recent case in the UK where an individual needed a copy of the
ATM machine video recording to prove that they didn't make the
withdrawal ... since the bank wasn't able to find the recording ... it
was decided in favor of the bank (and against the individual).

there have been comments that the care taken regarding video recording might
be significantly different if the bank was required to show the video
recording to prove it was the individual (as opposed to the individual
getting a copy from the bank to prove it wasn't them).

Pat Farrell <pfarrell@pfarrell.com> writes:
Isn't this just 20-20 hindsight? DEC was heavily into the whole OSI/ISO
process, driving parts of the spec. It was not clear in the early 80s
whether TCP/IP or OSI/ISO would actually scale to world wide.

The US Government mandated OSI/ISO in about 85.

Both the Government and DEC were shown to be wrong, but I don't agree
that it was obvious at the time.

I would claim it was very obvious ... OSI/ISO was "old technology",
high error copper wire and private networks ... somewhat 60s&70s
arpanet ... one of the things learned from arpanet was the
need for internetworking ... which OSI/ISO didn't have.

interop '88 was supposedly at least a venue for demonstrating ip
interoperability ... but there were lots of vendors with booths showing
off bits and pieces of OSI/ISO technology (possibly because of the
federal gov. gosip mandates) ... misc. past posts mentioning
http://www.garlic.com/~lynn/subnetwork.html#interop88

i had some tcp/ip stuff in a booth at interop '88 ... but it was a
different corporation's booth (than the one that was paying me).

another scenario from the mid-80s was frequent comments about ISO not
requiring demonstration of an actual implementation for a standard (this
was with regard to observations by people attempting full
implementation that it was nearly impossible as well as impractical
... resulting in lots of people then offering apologies that it was
purely a model and nobody was expected to actually do an
implementation). By comparison, it was pointed out that IETF requires
interoperable implementations for proceeding to standard.

the other comment about gov, DEC, as well as certain sectors of IBM (as
well as others) ... was somewhat similar to recent observation about
people at rarified levels believing if they say it is so ... then it
will be so ... recent reference on who to blame for IBM's ascii v. ebcdic
situation, referenced in this post
http://www.garlic.com/~lynn/2009s.html#63 CAPS Fantasia
and:
EBCDIC and the P-BIT (The Biggest Computer Goof Ever)
http://www.bobbemer.com/P-BIT.HTM

I was on the XTP technical advisory board ... and there was an attempt
to take XTP to x3s3.3 (the iso-chartered us body responsible for standards
corresponding to OSI levels 3&4) as HSP (high-speed protocol). x3s3.3 was
under an ISO mandate that no standards could be done for anything that
didn't correspond to OSI. HSP was rejected because it failed to
correspond to OSI for at least:

Thanks very much for letting us know Google was down. The system is now
back up.

The whole Google team was on vacation at the Burning Man festival for Labor
Day weekend. Unfortunately, we were out of email, web, and cell phone
contact due to the remote location. We were hoping the system would stay
up, but what seems to have been a loose disk cable caused our system to go
down. We plan to add additional staff and redundant machines to improve
the reliability in the future.

Also, we plan to replace the current index with a new one hopefully this week.

Thanks for using Google, and let us know if you have any other comments.

Like all the world's pivotal innovations, Alta Vista started life on
the back of a napkin. Just about a year ago, Louis Monier and Paul
Flaherty, both engineers at Digital's Palo Alto research labs, sat
down to lunch and got talking about big numbers. The newspapers were
full of Internet stories at the time, and editorials were predicting
that the amount of information online would soon be too much to
imagine, much less quantify. Meanwhile, Digital had just launched the
Turbo Laser, an Alpha-based server with a 64-bit address bus -
theoretically capable of addressing 17 billion gigabytes.

"We just had this crazy idea," recalls Monier, "of putting two and two
together." Twelve months have since passed and the idea - crazy or
otherwise - has become a reality. Alta Vista is now online at
http://altavista.digital.com. Its Turbo Laser, an eight-processor
AlphaServer 8400 5/300 with a massive 6GB memory and 210GB of RAID,
provides the largest full-text searchable index currently available on
the Web.

Scooter the spider

Monier, now principal engineer on the project, spent much of last
summer fashioning a web crawler capable of retrieving the contents of
the entire Net. His scratch-built design, called Scooter, is a
multi-threaded spider capable of retrieving as many as 1,000 documents
simultaneously. It runs from a single Alpha workstation (a DEC 3000
Model 900 with 1GB of memory and 30GB of RAID) at Palo Alto, and has
been designed to be a good web citizen - it obeys the Standard for
Robot Exclusion (see D3 p39, last month) and avoids hitting the same
site repeatedly.

Prototype index

While work on Scooter was under way, Monier hooked up with Mike
Burrows, a fellow researcher at Palo Alto. Burrows had developed
a prototype indexing technology as part of another project, and
this proved crucial. Monier describes it as, "Probably the
fastest and best indexing technology in the world." The indexer,
which runs on the Turbo Laser, can handle about 1GB of text per
hour, building a database that preserves the full text of the
pages it has read. This is the main bottleneck of the whole
process, and Scooter could actually run much faster. The indexer
has so far processed around 100GB - retrieved from around 22
million pages of text. The resulting index is around 33GB. "The
fact that we provide a full-text search is the biggest factor in
keeping it so big," Monier says. A full-text index allows a
number of techniques not possible by other means, such as

edjaffe@PHOENIXSOFTWARE.COM (Edward Jaffe) writes:
The story, as told by John Ehrman, is that the POO got so big, it
broke the book build software and nobody at IBM has the time,
inclination, or knowledge to fix it. :-(

it would be fun to get a look at it to fix generation of html ... POO
has been a subset of the architecture book ... which has been twice as
large ... it started out as a cms script file with conditionals ... so that
command line arguments to the cms script command would either format the
full document or just the POO subset.

last spring I had done a lot with the transcripts of the pecora hearings
(senate banking hearings in the wake of the '29 crash ... leading up to
Glass-Steagall) ... with a whole lot of cross-indexing and generated
loads of hrefs. the original scanned transcripts were six volumes with
2345 pgs total and 20 volumes with 9296 pgs total.

the original document wasn't the best ... so the scan wasn't outstanding
and in several places the OCR of the scanned pages is very low quality ...
so the individual HTML'ed pages from the OCR periodically have a lot of
garbage; as a result I put in each HTML'ed page an HREF reference back to
the corresponding page in the scanned document (the whole thing is under two
gbytes, most of which are the original scanned files).

by comparison, the Z -07 POO PDF file says 1344 pages ... for the heck of it
I just started a "save as text" ... which is going quite slowly ... a lot of
the formatting & figures are lost in "save as text".

Happy DEC-10 Day

glen herrmannsfeldt <gah@ugcs.caltech.edu> writes:
There was a manual on the PDP-10 hardware (instruction formats,
opcodes, etc.) It was way too expensive, though, for me to buy.
IBM kept their Principles of Operation manuals pretty reasonably
priced until very recently.

but softcopy is readily available ... realtime thread in ibm-main about the
latest softcopy being pdf only ... online HTML seems to be discontinued
http://www.garlic.com/~lynn/2010b.html#6 Bookshelves under BookMangler

Mark Crispin <mrc@panda.com> writes:
When people ask, "what were they thinking" when they look at the email
system in use at the White House, it comes back to GOSIP. Both
Clinton and Bush inherited a true nightmare which political detractors
have attempted to present as evidence of conspiracy when it was really
the lingering effects of GOSIP.

but in the early/mid 80s, the executive branch and several other
organizations were using PROFS. there is folklore that part of ollie's
problem was the extensive profs backup procedures (actually general backup,
but what the heck).

there were some rather derogatory labels applied to the old-time telco
mentality that was attributed to have been heavily involved with the
OSI/ISO standard. one of the more polite terms is referenced in this older
post
http://www.garlic.com/~lynn/2003j.html#73 1950s AT&T/IBM lack of collaboration?

Happy DEC-10 Day

glen herrmannsfeldt <gah@ugcs.caltech.edu> writes:
Though there are stories about features that were removed from
S/370 because they were too hard to implement on some models.

i periodically rail about dropping r/o segment protect and various other
features from 370 ... when 370/165 engineers ran into various problems
and delays retrofitting 370 virtual memory (field install/upgrade from
165 to 165II)

(vm370)cms had already been reorged to utilize r/o segment protect and
various other "new" features ... when it was decided to drop the
features to gain back six months in 370 virtual memory (& 165) announce
schedule

part of this dates back to doing the modifications in cp67 to provide
370 virtual machines including full virtual memory architecture ... this
was running and in regular use a year before the first engineering 370
(145) virtual memory hardware was operational.

cp67 virtual 370 was also an early distributed project between endicott and
cambridge ... using (internal) network connections.

some mention of wecker & decnet ... also 16% of the vm370 burlington mall
development group had gone to dec ... rather than moving to pok to
support mvs/xa
http://www.garlic.com/~lynn/2006m.html#21 The very first text editor

tcp/ip was the (internetworking) technology basis for the modern internet,
the nsfnet backbone was the original (internetworking) operational
implementation, and CIX was the business basis for the modern internet.
misc. past email related to nsfnet backbone
http://www.garlic.com/~lynn/lhwemail.html#nsfnet

we had an internal high-speed backbone and were working with various
parties that were working up to the nsfnet backbone. We could demonstrate
T1 and higher speed operation ... and believe that was part of the reason
that the nsfnet backbone RFP specified T1. however, because of various
internal politics, we weren't allowed to bid on the nsfnet backbone. The
director of NSF tried to help the situation by writing a letter to the
corporation, copying the ceo ... but that actually aggravated the internal
politics (with little statements like what we already had running was at
least five years ahead of all bid submissions). The winning nsfnet bid ...
actually
put in 440kbit links (not T1) ... but they had T1 trunks with telco
multiplexors (possibly as an effort to meet the letter of the rfp). We made
some snide references that they might even claim T5, since some of the
T1 trunks were possibly in turn multiplexed by telco over T5
trunks.

the next round was the nsfnet t3 backbone rfp. there was an internal
corporate gathering to answer the rfp ... i was the red team ... there were
20-30 people from a half dozen or so labs from around the world on the blue
team. at the final review, i presented first ... then the blue team
presentation started. five minutes into the blue team presentation ...
the person running the review pounded on the table and said he would lie
down in front of a garbage truck before he let anything but the blue team
proposal go forward.

POO in the past was a cms "script" file ... that was actually a subset of
the architecture book (at one time distributed in a RED 3-ring binder ...
and called the "red book" ... different from the current public manuals
called "red books"). cms command line options to the script command would
generate/format the full "red book" ... or just the POO subset. the
advantage of having a single document was that it helped keep the
information in sync. the non-POO part tended to be as large as the POO
subset ... and included things like "engineering notes" (implementation
considerations on different machines) and detailed instruction
justification descriptions.

Korean bank Moves back to Mainframes (...no, not back)

Howard Brazee <howard.brazee@cusys.edu> writes:
And the IS community has to realize that any solution is flawed if it
requires these salesmen and/or everybody who does on-line shopping to
be experts in security.

we had been called in to consult with a small client/server startup that
wanted to do payment transactions on their server ... the startup had
also invented this technology called "SSL" they wanted to use. Part of the
effort was deploying something called a "payment gateway" (which we
periodically claim is the original SOA) ... misc. past posts
http://www.garlic.com/~lynn/subnetwork.html#gateway

the effort is now frequently called "electronic commerce". given the
ease with which crooks can harvest account numbers and use them for
fraudulent transactions ... I drew up a list of things required for
commerce servers enabled for payment transactions ... like all individuals
involved in any way needing to have FBI background checks (the type required
of individuals in sensitive positions at financial institutions). part
of this was that long-term numbers claim that insiders are involved in
70% of such events.

somewhat as the result of the work on "electronic commerce", in the
mid-90s, we were invited to participate in the x9a10 financial standard
working group which had been given the requirement to preserve the
integrity of the financial infrastructure for ALL retail payments. as
part of that activity there were detailed end-to-end threat &
vulnerability studies done of different kinds & modes of retail
payments.

x9a10 financial standard working group produced a payment standard that
slightly tweaked the paradigm and eliminated the threat and vulnerability
from having account numbers and/or other transaction information
revealed ... for ALL retail payments (point-of-sale, face-to-face,
unattended, credit, debit, internet, ACH, stored-value, aka ALL).
http://www.garlic.com/~lynn/x959.html#x959

x9.59 financial standard didn't do anything about hiding or encrypting
the information in transactions ... but eliminated the ability of the
crooks being able to use that information for fraudulent transactions.

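a minimal sketch of the idea (toy stand-in keyed hash and invented field
names ... not the actual x9.59 message formats or algorithms): the
transaction carries the account number in the clear plus a signature that
only the cardholder's key can produce ... so a harvested account number
alone can't be turned into a valid transaction:

    #include <stdio.h>
    #include <stdint.h>

    /* toy keyed hash standing in for a real digital signature --
       illustration only, not the actual x9.59 format or algorithm */
    static uint64_t toy_sign(const char *secret, const char *msg)
    {
        uint64_t h = 1469598103934665603ULL;              /* FNV-1a basis */
        for (const char *p = secret; *p; p++)
            h = (h ^ (uint8_t)*p) * 1099511628211ULL;
        for (const char *p = msg; *p; p++)
            h = (h ^ (uint8_t)*p) * 1099511628211ULL;
        return h;
    }

    int main(void)
    {
        const char *cardholder_key = "cardholder-private-key"; /* never leaves the token */
        const char *tx = "acct=4111111111111111;amt=42.00;merchant=XYZ";

        uint64_t sig = toy_sign(cardholder_key, tx);

        /* a crook who harvested the account number (tx is in the clear)
           still can't produce a valid signature without the key */
        uint64_t forged = toy_sign("guessed-key", tx);

        printf("bank check, real tx:   %s\n",
               sig == toy_sign(cardholder_key, tx) ? "accept" : "reject");
        printf("bank check, forged tx: %s\n",
               forged == toy_sign(cardholder_key, tx) ? "accept" : "reject");
        return 0;
    }
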
Now the major use of "SSL" in the world today is this earlier
"electronic commerce" work to hide account numbers and transaction
details. A side effect of the x9.59 financial standard is to eliminate the
need for that hiding ... and therefore the major use of "SSL" in the world
today.

Larrabee delayed: anyone know what's happening?

"Steven G. Johnson" <stevenj@alum.mit.edu> writes:
What you're referring to is that at one point we found performance
improvements (I believe on an IBM RS/6000, if I remember correctly),
by inserting padding into the middle of a *one-dimensional* array
(again to avoid cache conflicts), but it didn't seem like there was a
sane interface for specifying a 1d array with padding in the middle.

we were brought in to look at issues with one of the major airline
res. systems ... first looking at "routes" (basically finding flt(s)
from origin to destination) ... which accounted for 25% of the workload on
a large collection of mainframes.

i redid the paradigm and implementation on rs/6000 ... as part of trying
to get at least ten times performance improvement. Initial pass got
twenty times performance improvement ... then some careful tuning of
cache line considerations got another factor of five times improvement
(100 times improvement overall). I added in a bunch of new features
(including collapsing several human interactions into a single operation)
... which then brought the overall performance of that single interaction
back down to about ten times (but it was eliminating several additional
interactions/transactions).

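a minimal sketch of the general kind of cache-line padding being discussed
(made-up sizes and a hypothetical cache geometry ... not the actual
res. system or FFTW code): with a power-of-two row stride, walking a column
maps every access to the same cache sets; padding the stride by one cache
line spreads them out:

    #include <stdio.h>

    /* assume (illustration only) 64-byte cache lines and a power-of-two
       number of sets; an 8192-byte row stride makes every row start map
       to the same set, so walking a column thrashes that one set */
    #define ROWS 1024
    #define COLS 1024            /* 1024 doubles = 8192 bytes per row */
    #define PAD  8               /* one extra cache line of doubles */

    static double unpadded[ROWS][COLS];
    static double padded[ROWS][COLS + PAD];

    int main(void)
    {
        double sum = 0.0;
        /* column walk: with the padded layout, consecutive rows land in
           different cache sets instead of evicting each other */
        for (int r = 0; r < ROWS; r++)
            sum += padded[r][0] + unpadded[r][0];
        printf("%f\n", sum);
        return 0;
    }
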
Anne & Lynn Wheeler <lynn@garlic.com> writes:
2nd hand tale of some of the competitors' testimony in the gov./ibm
anti-trust trial ... all computer manufacturers knew by the late 50s
that the single most important factor in the market place was to have a
compatible architecture across the whole machine line ... and they
weren't able to get all the different plant managers to toe the line
... different plant managers responsible for different models would do
various optimizations for the particular technology that they were
using. Only Watson prevailed in forcing all the plant managers
(responsible for the different models) to toe the 360 architecture
compatibility line.

so some number of business reporters from the auto show this week
commented that the us auto industry is making statements that they need
to be agile and adaptable to react to changing consumer preferences and
market conditions in order to compete with foreign competitors.

in the 90/91 time-frame the us auto industry had C4 task force meetings
about how to become more profitable and compete with foreign competitors
and invited some number of technology vendors to participate. they
detailed that a big inhibitor was the long product cycle (from idea to
rolling off the line) of 7-8 yrs ... when the foreign competitors had
cut their cycle to 3-4 yrs and looked to be in process of cutting that
in half (18-24 months). the industry had a bunch of details on what was
needed ... as well as looking to technology vendors for help in
improving the process as well as cutting the elapsed time. One of the
examples used was corvette design which tended to have very tight
space/size tolerances ... and between initial design and actually
starting to manufacture ... several components would have changed size
and shape ... and no longer fit (requiring expensive redesign & delay).

I chided some of the pok/mainframe attendees that it might be difficult
for them to offer advice, since at the time, they were in a similar product
cycle situation.

In any case, while in the C4 meetings it was possible for them to
clearly articulate all the problems and all the changes that were
required (including being more agile and adaptable), they didn't seem to
be actually able to do anything ... all the major stakeholders seemed to
have vested interests in preserving the status quo.

misc past posts mentioning Boyd &/or OODA-loops (OODA-loops being one of
the best paradigms for characterizing agile and adaptable ... especially
in competitive situations; in the past, I had sponsored Boyd's briefings
at IBM):
http://www.garlic.com/~lynn/subboyd.html

security and online banking

Steve Hayes <steve@red.honeylink.blue.co.uk> writes:
I don't think they will be using any sort of public key system for this for
three reasons:

1 - implementing public-key encryption and decryption needs more processing
power than they would want to put on a chip-and-pin card.

2 - public key would only be useful if the system is both encrypting and
decrypting. An entire block of encrypted data would have to be input to the
handheld device by the user and perhaps copied back. With RSA, this would be
well over 100 digits. I believe there are other public key systems that are
better but not enough to make this practical.

3 - public key techniques aren't needed for this job.

actually quite a bit of work was done on that in the 90s, as part of the
x9a10 financial transaction standard working group (which had been given the
requirement to preserve the integrity of the financial infrastructure
for ALL retail payments).

we had been called in to consult with a small client/server startup that
wanted to do payment transactions on their server ... the startup had
also invented this technology called SSL they wanted to use. Part of the
effort was deploying something called a "payment gateway" (which we
periodically claim is the original SOA) ... misc. past posts
http://www.garlic.com/~lynn/subnetwork.html#gateway

the effort is now frequently called "electronic commerce". given the
ease with which crooks can harvest account numbers and use them for
fraudulent transactions ... I drew up a list of things required for
commerce servers enabled for payment transactions ... like all individuals
involved in any way needing to have FBI background checks (the type required
of individuals in sensitive positions at financial institutions). part
of this was that long-term numbers claim that insiders are involved in
70% of such events.

somewhat as the result of the work on "electronic commerce", in the
mid-90s we were invited to participate in the x9a10 financial standard
working group. as part of that activity there were detailed end-to-end
threat & vulnerability studies done of different kinds & modes of retail
payments. the ALL meant things like point-of-sale, attended, unattended,
credit, debit, stored-value, gift card, contact, contactless, internet,
wireless, transit turnstile, aka ALL (as well as most online banking
transactions). The transit industry had a requirement that the operation
be able to be performed in the limited power (of contactless) and
elapsed time (small subsecond) of a transit turnstile ... and also be very
inexpensive.

I had semi-facetiously joked that I would take a $500 milspec part and
aggressively cost reduce it by 2-3 orders of magnitude while improving on
the security (while also being able to satisfy the transit turnstile
contactless power and elapsed time requirements ... as well as cost
requirements).

the resulting x9.59 standard also eliminated the requirement for hiding the
account number and transaction detail.

Now, the largest use of SSL in the world today is the previous work
related to "electronic commerce" for transaction encryption as part of
hiding account number and transaction detail ... however, the x9.59
standard eliminates the need to hide that information ... and so would
also eliminate the major use of SSL in the world today.

hancock4 writes:
The descendants of IBM's System/360 architecture, now called the "Z"
series, will very likely hit their 50th anniversary in 2014. There's
still a huge installed base in service that isn't going anywhere,
despite the ongoing conversions to 'client-server' processing.

But how long will it live on? I don't think anyone would've expected in
1964 that it would've lasted this long. Will it make it to its 75th
anniversary?

in the 90s ... there was a big effort (billions spent just by various
institutions in manhattan) to rewrite major applications to eliminate
the overnight batch window and move to straight-through processing
... leveraging large numbers of "killer micros". There were
large disasters ... since the software technology being used introduced a
factor of 100 times greater overhead (compared to the cobol batch), totally
swamping any anticipated thruput improvement.

a couple yrs ago, I was involved in a proposal to an industry group for a
new effort at straight-through processing ... but using technology
that was possibly only 2-3 times less efficient (than cobol batch) in
achieving highly parallel operation for straight-through processing
(taking each individual transaction to completion ... rather than
deferring a major portion of the transaction to overnight batch processing,
easily achieving all of the original objectives). the response was
nearly a scalded-cat reaction ... because so many organizations had been
so badly burned by the failed 90s efforts.

any change appears to at least require a new generation ... as
well as demonstrable cost/benefit ... i.e. the cost of a rewrite/move has
to demonstrate ROI benefit compared to the current implementation (including
confidence in any new implementation being at least as dependable as an
implementation that has been running reliably for 20-30 yrs).

the 90s were also a period of growth and they needed both new function as
well as additional capacity (part of the motivation for change). it may be
some time before such a growth period returns. growth is going on in other
areas of the world ... which have the opportunity for doing new
implementations from scratch without any consideration regarding
legacy stuff.

How long for IBM System/360 architecture and its descendants?

Stephen Wolstenholme <steve@tropheus.demon.co.uk> writes:
I've no idea how long 360 architecture will live. That sort of
prediction is not possible. I do know that 'client-server' and
mainframe architecture are not mutually exclusive.

terminal emulation contributed to early uptake of the PC ... but later as
PCs & PC software became more sophisticated, terminal emulation started
to represent an inhibitor. a senior technical person from the disk division
even had a presentation at the annual world-wide internal communication
conference that started out saying the head of the communication group
was going to be responsible for the demise of the disk division.

the issue was that terminal emulation was becoming such a
stranglehold on data into/out-of the mainframe datacenter ... that data was
starting to leak out of the datacenter at an increasingly alarming rate to
reside at locations outside of the (mainframe) datacenter.

in that time-frame we had come up with 3-tier architecture and were out
pitching it to customer executives ... and taking lots of barbs from the
communication group (part of preserving the terminal emulation
install base)
http://www.garlic.com/~lynn/subnetwork.html#3tier

part of the concepts came from dealing with NCAR ... and using a mainframe
as file/data server (early NAS/SAN) for supercomputers ... i.e.
technologies other than terminal emulation ... allowing mainframes
to have efficient & high-thruput connectivity ... some of this was related
to my high-speed data transport project
http://www.garlic.com/~lynn/subnetwork.html#hsdt

in the mid-90s ... there were various presentations by dial-up online
banking organizations. the consumer banking organizations were talking
about moving to the internet (& using SSL) ... a major reason was that
it offloaded the enormous customer support costs for (serial-port)
dialup modems to the internet service providers (presentations about
some operations having libraries of greater than 60 different
software drivers for their dialup banking software, and large customer
call center support costs). a side-issue was that the ISPs could amortise
all of that support across a much larger market ... rather than just
dialup online banking (and as it turns out, the much larger internet
market prompted vendors to work out lots of the serial-port dial-up
modem problems and start including tested support in the original product
... instead of it being an aftermarket problem).

however, the cash-management, commercial/business dialup online
banking operations said that they would never move to internet (even
if they got 128-bit SSL instead of 40-bit SSL) ... because of a long
list of threats and vulnerabilities; nearly every possible kind of
exploit that has occurred in the past 15 yrs was already on their list
in the mid-90s (as reasons for not moving to the internet).

when Obama came into office, he made several statements about taking
back the gov ... because so much of the gov. had been outsourced to
vendors ... who placed their own interests ahead of the gov.

This goes along with a past article about Success of Failure ...
vendors finding that it is more profitable to have a series of failed
projects (especially noticeable are whole string of failed fed. gov.
IT/dataprocessing modernization projects) than having a success.

the folklore was that in the mid-90s there was still a gap in being able to
claim the budget had been "balanced". this was in the period when they
were auctioning off airwaves for large sums of money (also before the
internet bubble burst). the strategy to close the gap was that congress
passes legislation to mandate HDTV ... the HDTV digital transmission
uses less airwaves than the old analog ... and then the freed-up airwaves
are auctioned off for enormous amounts of money ... sufficient to close
the gap for a balanced budget.

Steve Hayes <steve@red.honeylink.blue.co.uk> writes:
The bank sends a challenge of some number of digits which the user types
into the handheld device. Enough digits are used to make it unlikely that
the same challenge would ever be sent twice.

finread from the later part of the 90s was to address a significant
portion of the things that the cash management/commercial dialup online
banking operations highlighted as major vulnerabilities ... including a
whole slew of end-point compromises of the PC (in some sense ... the
countermeasure was to move the end-point to the finread device).

finread got caught up in the disaster of some hardware token deployments
from the period and the resulting widespread opinion in the financial
industry that chipcards weren't practical in the consumer market.

the deployment disasters turned out not to be with the actual hardware
tokens ... but with the card acceptor devices (card readers) that were
given away as part of the programs. it appeared that they got a lot of
obsolete serial-port devices (for the give-away) and ran into the
enormous customer support issues/problems that had earlier motivated
dial-up online banking to move to the internet (apparently the
ephemeral financial infrastructure institutional knowledge about
serial-port support problems had evaporated in the few short years
between the move of consumer online banking from proprietary dial-up to
the internet ... and the smartcard deployment programs).

Peter Flass <Peter_Flass@Yahoo.com> writes:
I argue that there's no reason for a RISC architecture to be full of
"gotchas" and exceptions to trip up the programmer, that's just sloppy
work by the designers.

future system was going to completely replace 360 ... and was (at least)
as different from 360 as 360 had been from prior generations.

in a mid-70s technology meeting with a risc presentation ... there were lots
of references that whenever there was a trade-off between hardware
complexity and software complexity ... the decision was hardware simplicity
... and software complexity would be used to enable the simpler hardware.

for instance there was no cache consistency between I-cache
(instruction) and (store-in) D-cache (data). This resulted in scenarios
where a loader would be operating on an instruction image brought into
memory ... and any modifications would appear in the D-cache ... and
wouldn't necessarily be in main memory for fetch by the I-cache.

To make it work ... the loader had to execute instructions that forced
modified data from the D-cache to memory and, if the corresponding memory
locations were in the I-cache, invalidated them.

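a minimal present-day sketch of what such a loader has to do (posix mmap
plus the gcc/clang __builtin___clear_cache builtin ... the 801 used its own
explicit cache-management instructions, not this interface):

    #include <string.h>
    #include <sys/mman.h>

    /* copy an instruction image into an executable buffer, then force
       the modified data out of the D-cache and invalidate any stale
       copies in the I-cache before anyone branches to it */
    void *load_code(const unsigned char *image, size_t len)
    {
        void *buf = mmap(NULL, len, PROT_READ | PROT_WRITE | PROT_EXEC,
                         MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
        if (buf == MAP_FAILED)
            return NULL;

        memcpy(buf, image, len);    /* lands in the (store-in) D-cache */

        /* without this, a fetch may see stale memory: the stores can
           still be sitting in the D-cache, and the I-cache may hold old
           contents for these addresses */
        __builtin___clear_cache((char *)buf, (char *)buf + len);

        return buf;                 /* caller may now branch to it */
    }
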
A whole lot of technology went into the 801/risc pl8 programming
language ... which was designed to compensate for 801/risc hardware
shortcomings/simplicity.

I don't think anybody ever anticipated that C would ever be used for
programming a risc processor.

... and consistency checking of the RFC process. some of it started showing
up as section 6.10 in STD1 (starting about 1600 or so) for a time.

i expanded to doing a filelist of several dozen sites on a regular basis and
getting copies of all sorts of stuff from the late 80s and early 90s
... some of which I still have ... for example the following files from the
policies directory:

William Hamblen <william.hamblen@earthlink.net> writes:
At the moment OASDI taxes are roughly 15% of wages, and pensions paid
are roughly equal to taxes collected. If nothing changes on the
pension side, the tax side will have to be increased to 30% of wages
quite soon. The Congress could break the connection between wages
earned and pension paid and make the pension means tested in order to
avoid big tax increases. What that will do for retirement saving I
can't predict. A lot of people will figure why save anything if the
government will just reduce benefits.

there was an article that the baby boomer generation is four times larger
than the previous generation and that the generation following the baby
boomers is only half as large ... as the baby boomer bubble moves into
retirement ... the ratio of the following working generation to the
retiree (baby boomer) generation is reduced by a factor of eight
(a 1/8th ratio of the following generation in prime earning years to the
number of retirees).

during the baby boomer prime earning years ... the SS collections were
larger than the SS payouts ... the excess going into the general fund
(folklore is a bottom desk drawer somewhere in west virginia with the
IOUs) ... effectively turning SS into a "pay as you go" retirement fund
(as opposed to fully funded retirement) ... and used for underwriting
general federal expenditures.

with a change of a factor of eight in the ratio of those paying in to those
receiving benefits ... there is some implication that the 15 percent will
have to increase to 120 percent (i.e. 15% times eight) ... to maintain the
same level of benefits (because of the increase in the ratio of those
receiving benefits to the number working and paying taxes).

there had been some past legislation from congress requiring
corporations to move to fully funded retirement plans (i.e. money paid
in as person worked was what was used to pay their benefits) ... some
number of corporations that were on pay-as-you-go ... found that it
required a huge increase (they had also been leveraging the baby boomer
worker generation being so much larger than the retired generation). some
number
just declared bankruptcy and threw their workers into the gov. pension
plan.
http://www.pbgc.gov/ and
http://www.pionline.com/article/20090406/PRINTSUB/304069981

there were some articles that CEOs made their stock value bonus
objectives solely based on that change to corporate assets.

the other issue is that there are lots of reports that the following
generation is less well educated and less qualified for well-paying jobs
... so the base income being taxed is drastically reduced ... in order
to maintain the same level of benefits ... it may be necessary to increase
the SS tax to 250% (or more).

Happy DEC-10 Day

Morten Reistad <first@last.name> writes:
There was a long transition centered around jan 1st 1987; and that was
the intended cut date. It slipped, but not by much.

The Battle of the Commercial Internet happened january 1991 to january 1992,
otherwise known as the "cix wars". It was about universal connectivity, and
not having censorship at the core of the Internet, in the form of AUPs,
traffic priority etc.

the other scenario for the AUPs was the telcos trying to solve the
chicken&egg scenario. fiber &/or other new technology drastically increased
available bandwidth ... but the telcos were facing an impossible-to-solve
problem ... to significantly increase bandwidth use ... the price
per bit had to be drastically reduced (even to encourage the
development of a new generation of bandwidth hungry applications). The
telcos have a significant fixed run rate ... drastically reducing
bandwidth usage rates would have the telcos operating at a large loss for
several years before the new generation of bandwidth hungry applications
appeared.

the nsfnet backbone T1 RFP was $11.2M ... however, estimates are that
the amount of resources that commercial entities put into the backbone
was at least four times that amount. the scenario is that the telcos
could have a restricted-use free bandwidth sandbox as an incubator for the
next generation of bandwidth hungry applications ... w/o affecting their
commercial revenue streams.

that is somewhat separate from some of the technology choices put into
that T1 RFP (i.e. 440kbit links ... not real T1 & higher-speed links
that we were running) ... recent post
http://www.garlic.com/~lynn/2010b.html#10 Happy DEC-10 Day

Morten Reistad <first@last.name> writes:
The 2000 boom puzzled us as financial analysts. We applied all the bubble
models we had, and they all told us that this should have crashed already.
But it kept going, going, going. We still don't know what kept it fuelled
for so long. There must have been a lot (a _LOT_) of external liquidity
added that escaped the models. I still wonder what happened.

there appeared to be a community of investment bankers that ran IPO
mills: put in investment, two years of hype ... and then IPO ... maybe
$20m in on the front end, $2b out on the backside. there was even a benefit
if the new company never actually succeeded ... since that left the
market still open for the next IPO.

lots of people buying the stock never actually understood anything about
the technology of the company they were buying into ... they just got
caught up in the hype. being caught up in the hype of something new &
not understood ... resulted in lots of people simply ignoring
fundamentals (some flavor of "emperor's new clothes")

Pat Farrell <pfarrell@pfarrell.com> writes:
We ran into that much earlier. There were a number of "block mode"
terminals that did forms processing locally, and dumped in all of the
characters in a single stream, at whatever the modem had. Using them
greatly improved the user experience for boring data entry usages. DEC
had a very hard time understanding why anyone would want it, let alone
expect it to work.

topaz/3101 "mod2s" ... w/o the ROM upgrade they were effectively very
similar to other glass-teletypes, "mod2s" added "block mode" ... aka
somewhat more like 3270s (had some early mod1s and got rom image from
plant site to burn new mod2 roms).

for the home office, i upgraded from a cdi miniterm at 300 baud ... to a
3101 at 1200 baud.

Happy DEC-10 Day

Eric Chomko <pne.chomko@comcast.net> writes:
A company as big as DEC doesn't die by one thing. IBM had similar
problems as DEC but recovered. DEC could have recovered but made bad
decisions to get into trouble and made bad decisions trying to get out
of trouble. Give IBM credit for making good decisions to get out of
trouble.

also IBM had a much broader market. A lot of DEC was in the vax/vms
mid-range market ... which endicott 43xxs also sold into. there was a big
explosion in the mid-range market starting in the late 70s and going up
to the mid-80s. 43xx sales were very similar to vax ... except the 43xx
also had some huge corporate sales with machines being ordered several
hundred at a time (sort of a precursor to departmental servers). endicott
expected the later 43xx models in the mid-80s to see sales volumes similar
to what the earlier models saw ... but by that time the mid-range market
was moving to workstations and large PCs.

in jan '83 when arpanet was making the transition to internet (having
approx. 100 nodes & 255 hosts ... since arpanet counted IMP network
nodes as separate from the number of attached host processors) ... the
internal network was getting close to passing 1000 nodes/hosts ... lots
of the rapid growth being 43xx machines ... misc. past email mentioning
43xx
http://www.garlic.com/~lynn/lhwemail.html#43xx

one of the issues about the internet passing the internal network in
number of nodes/hosts in the mid-80s ... was the appearance of workstations
and PCs as (internet) nodes ... while the internal network was forced to
use terminal emulation for such machines.
http://www.garlic.com/~lynn/subnetwork.html#emulation

Happy DEC-10 Day

jmfbahciv <jmfbahciv@aol> writes:
It was worse than what you describe. TW was expected to write the
software of new disks without having one on the system (because
product management had promised the first ones to customers);
product management had to be retrained every damned time that,
in order to deliver the hardware, the software had to exist and
that the software couldn't exist until we had the hardware
installed on our system for the months required to write, test,
and load test the software.

i was being allowed to play disk engineer in bldgs. 14&15 ... because I
would help them out with stuff ... and they would let me play with some
of their stuff. as part of validating disk operations ... the disk labs
... would get early engineering processors ... like maybe the 3rd or 4th
... as soon as the processor engineers had extras.

at one point i was doing some work on a (disk) engineering 4341 for the
endicott 4341 software & performance groups ... since I had better
access to a 4341 (than people in the various endicott 4341 groups).

the major internal network technology started out on cp67/cms and moved
to vm370/cms ... it was relatively nicely layered ... even with a kind
of gateway functionality in every node.

the major mainframe batch system networking appeared to have been
inherited from HASP (where the networking source code changes had the
identifier "TUCC" in cols. 68-71). HASP/JES had a one-byte-index (255
entry) table to define things. It started out being used for "pseudo"
unit record devices (printers, punches, readers) ... and a typical system
might have 60-80 such entries defined. The networking code then utilized
the remaining entries to define network nodes, i.e. limiting things to
under 200 definitions. So effectively for nearly the whole lifetime of
the internal network ... MVS/JES nodes were typically limited to edge
nodes (not only would they discard traffic for destination nodes that
weren't in their table ... but they would also discard traffic when the
originating node wasn't in their table).

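a minimal sketch of the data structure problem (representative numbers and
invented names ... the actual HASP/JES control blocks were different):

    #include <stdio.h>
    #include <string.h>

    /* one-byte index => at most 255 usable entries, shared between
       pseudo unit-record devices and network node definitions */
    #define TABLE_SIZE 255

    static char table[TABLE_SIZE][9];      /* 8-char names + NUL */
    static int  used = 0;

    static int define(const char *name)
    {
        if (used >= TABLE_SIZE)
            return -1;                      /* table full: node unreachable */
        strncpy(table[used], name, 8);
        return used++;
    }

    int main(void)
    {
        char name[9];
        /* a typical system defined 60-80 pseudo printers/punches/readers
           first ... */
        for (int i = 0; i < 70; i++) {
            sprintf(name, "UR%06d", i);
            define(name);
        }
        /* ... leaving well under 200 slots for network nodes, while the
           internal network went past 1000 */
        int nodes = 0;
        for (int i = 0; i < 1000; i++) {
            sprintf(name, "NODE%04d", i);
            if (define(name) < 0) break;
            nodes++;
        }
        printf("defined %d of 1000 nodes before the table filled\n", nodes);
        return 0;
    }
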
the other problem was that they had jumbled header information
... mixing up JES control information with network control information.
In fact, traffic between JES systems at different release levels was
notorious for crashing JES & bringing down production MVS systems.

the main internal networking implementation (on cp67/cms & then
vm370/cms) had to develop a set of HASP & then JES "drivers" ... that
emulated JES headers to immediately connected JES systems. Eventually
there was a whole library of JES drivers that not only emulated JES
header information ... but if the traffic originated from some other
JES system ... it might be necessary to have the pseudo driver reformat
various JES fields as a countermeasure to MVS system crashes. There was an
infamous case of a san jose JES/MVS system causing Hursley MVS systems to
crash ... and it getting blamed on the vm370 networking code (because
the vm370 network code was supposed to have all the reformatting
implementation for keeping different JES systems from causing each other
to crash). misc. past posts mentioning hasp, jes, etc
http://www.garlic.com/~lynn/submain.html#hasp

for customers, there was an attempt to only ship the JES family of drivers
with vm370/cms (eliminating shipping the native vm370/cms drivers ... even
tho they had higher function and higher thruput ... than the JES stuff).
bitnet then started to exceed 200 nodes ... and JES finally had to get
around to shipping support for 999 nodes ... but that was well after the
internal network had passed 1000 nodes (then JES had to increase the
limit to 1999 nodes ... but only after the internal network had already
passed 2000 nodes).

Pat Farrell <pfarrell@pfarrell.com> writes:
Yes, the 3270 style was a lot easier on the main CPU, and provided a
decent user experience when filling out forms. A lot of vendors made
them, including DEC eventually with the VT132/VT131

Using them was very uncomfortable to anyone used to a Tops-10/Tops-20
system, where the computer responded to every character as it was entered.

they were uncomfortable to anybody used to interactive computing.

the 3277 had a lot of the technology in the head ... so it was
possible to make some modifications. a particularly annoying feature was
that if you happened to hit a key at the same time the screen was being
written, the keyboard would lock and you had to hit reset to clear it.

one "fix" as a small "FIFO" box ... unplug the keyboard from the head,
plug in the FIFO box and then plug the keyboard into the box ... it
could queue some number of keystrokes to mask the half-duplex operation.
it was also possible to do some soldering inside the keyboard to
decrease the repeat key "delay" and the repeat key "rate".

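a minimal ring-buffer sketch of what the FIFO box was doing (the real thing
was hardware spliced between keyboard and head ... the sizes here are made
up):

    #include <stdio.h>

    /* queue keystrokes while the (half-duplex) display is busy being
       written, then drain them when the keyboard unlocks -- instead of
       a keystroke locking the keyboard */
    #define FIFO_SIZE 64

    static char fifo[FIFO_SIZE];
    static int head = 0, tail = 0;

    static int key_pressed(char c)          /* from the keyboard side */
    {
        int next = (tail + 1) % FIFO_SIZE;
        if (next == head) return -1;         /* box full: overrun */
        fifo[tail] = c;
        tail = next;
        return 0;
    }

    static int terminal_ready_pop(void)      /* to the 3277 head */
    {
        if (head == tail) return -1;          /* nothing queued */
        char c = fifo[head];
        head = (head + 1) % FIFO_SIZE;
        return c;
    }

    int main(void)
    {
        /* user keeps typing during a screen write ... */
        for (const char *p = "dir"; *p; p++) key_pressed(*p);
        /* ... screen write completes, queued keystrokes drain */
        int c;
        while ((c = terminal_ready_pop()) != -1) putchar(c);
        putchar('\n');
        return 0;
    }
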
that all changed in the move to 3274/3278 ... a lot of the electronics
were moved back into the 3274 controller ... making the 3278 much
cheaper to manufacture ... but further killing it for interactive work.

during the late 70s ... somebody from another internal installation was
claiming the best timesharing service in the company ... using .25sec
system response as the example. I pointed out that I was running similar
hardware with a nearly identical workload and getting .11sec system
response; they then tried to make some statements that it was never fair
to compare anything to stuff I did.

later with PCs and terminal emulation ... those with ANR 3277 emulation
got significantly higher file upload/download rates than those stuck with
3278 emulation.

I had done a lot of performance & algorithm stuff as an undergraduate that
got shipped in cp67 ... but was later dropped in some of the
simplification that went on in the morph to vm370. The univ. & SHARE
user groups kept lobbying IBM to incorporate my stuff back into vm370
(customer calls for things like the "wheeler" scheduler). I kept doing
cp67 & vm370 stuff all during the future system period ... misc. past
posts mentioning future system
http://www.garlic.com/~lynn/submain.html#futuresys

since future system was going to completely replace 360/370 ... lots of
work stopped on 370. when the future system effort was killed, there was a
mad rush to get stuff back into the 370 product pipeline. that was in large
part behind picking up lots of stuff I had been doing and shipping it.

however, after that short period (where only a relatively small amount
made it out), it was pretty much a return to business as usual and little
additional work made it out of internal operation.

one of my hobbies had been building, distributing, and supporting highly
modified/enhanced systems for internal use (independent of others
picking up changes I made for product ship). at one point, I claimed
that I had a (personal) distribution list that was as large as the total
number of MULTICS systems that ever shipped.

Some number of the CTSS people had gone to the 5th flr of 545tech sq to
work on Multics. Others had gone to the science center on the 4th flr and
did virtual machine systems, internal network technology, interactive
computing, inventing GML (precursor to SGML, HTML, XML, etc) ... some
past posts mentioning 545tech sq
http://www.garlic.com/~lynn/subtopic.html#545tech

there was a little rivalry between the 4th and 5th flrs ... but it wasn't
fair to compare the number of MULTICS systems to the total number of
customer mainframe (mostly batch) systems, or even the much smaller number
of customer virtual machine mainframe systems, or the much, much smaller
number of internal virtual machine mainframe systems ... however, the
number of internal systems I was personally doing was about the same as
the total number of MULTICS systems.

Pat Farrell <pfarrell@pfarrell.com> writes:
"response time" was a great source of discussion and more, often turning
into marketing speak and into spin and puff.

The key question, usually left unspoken, was "what are you responding to?"

On a TOPS-10 or -20 system one level is responding by echoing the
characters as they are typed. Another is responding to COMMND processing
of escape, ? or tab. Further up scale are actually doing something,
such as responding to a "dir" command.

It gets much more difficult to describe, let alone measure, if you do
something "significant" such as issuing a "find fname = 'Pat'" query to
your DBMS system. On a well running Tops/10 or 20 system, the 0.25
second response would be typical for more business applications.

Obviously, doing a complex sequential search through the tables ( a full
table scan in modern DBMS speak) is nearly always going to take much longer.

Meeting a "five second response time, 95% of the time" was a reasonable
contract term in the early 80s for large DECsystem-20s.

i actually simplified a little. the basis was "trivial" (command)
response ... say in the editor and doing some function like finding some
string ... frequently involved something on the order of a dozen or so
page faults. I also simplified that the organization quoting .25sec was
avg. trivial command response ... and I was measuring 95th percentile
.11sec response for the same set of commands/operations.

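the avg vs. percentile distinction is easy to see ... a minimal sketch with
made-up samples (a single outlier inflates the avg while the 95th
percentile reflects what most interactions actually saw):

    #include <stdio.h>
    #include <stdlib.h>

    static int cmp(const void *a, const void *b)
    {
        double d = *(const double *)a - *(const double *)b;
        return (d > 0) - (d < 0);
    }

    int main(void)
    {
        /* made-up trivial-command response samples, in seconds */
        double t[] = { .08, .09, .09, .10, .10, .10, .11, .11, .12, .40 };
        int n = sizeof t / sizeof t[0];

        double sum = 0;
        for (int i = 0; i < n; i++) sum += t[i];
        qsort(t, n, sizeof t[0], cmp);

        /* the avg is dragged up by the tail; the 95th percentile isn't */
        printf("avg  : %.3f sec\n", sum / n);
        printf("95th : %.3f sec\n", t[(int)(0.95 * (n - 1))]);
        return 0;
    }
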
in the wake of the demise of the future system effort ... i got to release
a bunch of stuff for vm370 (lots of it from nearly a decade earlier as an
undergraduate on cp67) ... some amount of it was packaged as a separate,
special kernel "resource manager" product ... including the "wheeler"
fair share scheduler. misc. posts mentioning the wheeler fair share
scheduler
(&/or resource manager)
http://www.garlic.com/~lynn/subtopic.html#fairshare

the ".25 sec" response numbers were with my base "resource manager"
product ... but not with a lot of other stuff I did subsequently (to get
the 95percentile ".11 sec" response).

i also handled some amount of the technology transfer to endicott for what
became sql/ds.

later when I was doing cluster scaleup ... we worked with some number of
DBMS vendors that had unix implementations as well as vax/cluster
implementations. they had a list of things that they felt were done wrong
in vax/cluster that I needed to correct. for the cluster/scaleup work
... I did a cluster distributed lock manager (that addressed the
shortcomings they believed were in the vax/cluster implementation) but
provided an API that mimicked vax/cluster (simplifying the port of their
vax/cluster implementations to the cluster scaleup platform).

one of the things some of the vendors were doing in single complex
mode (uniprocessor or smp) was some form of fast commit ... i.e. the
transaction is considered complete as soon as the log record was written
... but not necessarily the actual buffer record written to its DBMS disk
location (recovery after a failure then required rerunning log
transactions to correctly update dbms records). However, in a cluster
environment, they would first write the record to disk ... if processing
was required on a different system. I worked out the details of being able
to transfer the buffer record, piggybacked on the same transmission that
transferred lock ownership (effectively doing a cache-to-cache copy and
preserving fast-commit semantics across clustered machines).

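a minimal sketch of the piggyback idea (invented message layout and field
names ... the real protocol was considerably more involved): attach the
dirty page image and its log sequence number to the lock-grant message,
instead of forcing the page to disk before the handoff:

    #include <stdio.h>
    #include <string.h>

    #define PAGE_SIZE 4096

    /* lock-grant message: lock ownership transfer plus, piggybacked,
       the current (possibly not-yet-written-to-disk) page image and
       the log sequence number covering it */
    struct lock_grant {
        int           lock_id;
        int           new_owner;
        long          lsn;             /* log seq no protecting the page */
        int           page_valid;      /* dirty image attached? */
        unsigned char page[PAGE_SIZE];
    };

    /* releasing node: instead of writing the dirty buffer to its DBMS
       disk location (losing fast commit), attach it to the grant */
    void build_grant(struct lock_grant *g, int lock_id, int new_owner,
                     long lsn, const unsigned char *dirty_page)
    {
        g->lock_id    = lock_id;
        g->new_owner  = new_owner;
        g->lsn        = lsn;
        g->page_valid = (dirty_page != NULL);
        if (dirty_page)
            memcpy(g->page, dirty_page, PAGE_SIZE); /* cache-to-cache */
    }

    /* the receiving node installs the page in its own buffer pool and
       carries on; fast-commit semantics are preserved cluster-wide */
    int main(void)
    {
        static unsigned char page[PAGE_SIZE];
        struct lock_grant g;
        build_grant(&g, 42, /*new_owner=*/2, /*lsn=*/1001, page);
        printf("grant lock %d to node %d, lsn %ld, page attached: %d\n",
               g.lock_id, g.new_owner, g.lsn, g.page_valid);
        return 0;
    }
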
the issue for this wasn't so much the actual transfers ... it was
working out how to correctly merge the sequence of records from multiple
different logs for recovery after a failure.

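a minimal sketch of that merge (assuming, hypothetically, a cluster-wide
monotonic sequence number stamped on every log record): recovery becomes a
k-way merge of the per-node logs in sequence order:

    #include <stdio.h>

    struct log_rec { long seq; int node; /* plus redo payload */ };

    /* per-node logs, each already in local order; the cluster-wide seq
       no (a hypothetical field) is what makes a global order possible */
    struct log_rec log_a[] = { {1,0}, {4,0}, {6,0} };
    struct log_rec log_b[] = { {2,1}, {3,1}, {5,1} };

    int main(void)
    {
        int i = 0, j = 0, na = 3, nb = 3;
        /* standard two-way merge; k logs would use a small heap */
        while (i < na || j < nb) {
            struct log_rec *r;
            if (j >= nb || (i < na && log_a[i].seq < log_b[j].seq))
                r = &log_a[i++];
            else
                r = &log_b[j++];
            printf("replay seq %ld from node %d\n", r->seq, r->node);
        }
        return 0;
    }
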
Happy DEC-10 Day

Anne & Lynn Wheeler <lynn@garlic.com> writes:
the other scenario for the AUPs was the telcos trying to solve the
chicken&egg scenario. fiber &/or other new technology drastically increased
available bandwidth ... but the telcos were facing an impossible-to-solve
problem ... to significantly increase bandwidth use ... the price
per bit had to be drastically reduced (even to encourage the
development of a new generation of bandwidth hungry applications). The
telcos have a significant fixed run rate ... drastically reducing
bandwidth usage rates would have the telcos operating at a large loss for
several years before the new generation of bandwidth hungry applications
appeared.

The purpose of NSFNET is to support research and education in and
among academic institutions in the U.S. by providing access to unique
resources and the opportunity for collaborative work.

This statement represents a guide to the acceptable use of the NSFNET
backbone. It is only intended to address the issue of use of the
backbone. It is expected that the various middle level networks will
formulate their own use policies for traffic that will not traverse
the backbone.

(1) All use must be consistent with the purposes of NSFNET.

(2) The intent of the use policy is to make clear certain cases
which are consistent with the purposes of NSFNET, not to
exhaustively enumerate all such possible uses.

(3) The NSF NSFNET Project Office may at any time make
determinations that particular uses are or are not
consistent with the purposes of NSFNET. Such determinations
will be reported to the NSFNET Policy Advisory Committee
and to the user community.

(4) If a use is consistent with the purposes of NSFNET, then
activities in direct support of that use will be considered
consistent with the purposes of NSFNET. For example,
administrative communications for the support infrastructure
needed for research and instruction are acceptable.

(5) Use in support of research or instruction at not-for-profit
institutions of research or instruction in the United States
is acceptable.

(6) Use for a project which is part of or supports a research or
instruction activity for a not-for-profit institution of
research or instruction in the United States is acceptable,
even if any or all parties to the use are located or
employed elsewhere. For example, communications directly
between industrial affiliates engaged in support of a
project for such an institution is acceptable.

(7) Use for commercial activities by for-profit institutions is
generally not acceptable unless it can be justified under
(4) above. These should be reviewed on a case-by-case basis
by the NSF Project Office.

(8) Use for research or instruction at for-profit institutions
may or may not be consistent with the purposes of NSFNET,
and will be reviewed by the NSF Project Office on a
case-by-case basis.

mechanical typewriters ... had more than that delay on carriage return.

3270 had two modes for input ... no delay for whole screen of input
(80x24, potentially 1920 chars) ... and enter ... in this case .11
seconds.

there was a study of human interactions at a research institution in the
early 70s ... which found some variation in humans' ability to
differentiate between an instantaneous response and some delay ... with
the threshold ranging between .10 seconds and .25 seconds.

later there were brain scan studies that showed variation in different
people in the time that signals propagate thru the brain (which possibly
correlates with the threshold for differentiating instantaneous from
delayed).

there was also a subsecond time-sharing study showing that consistent,
predictable delay ... say always 1 second ... was better than randomly
varying delay ... say varying between .5 second and 2 seconds.

the issue with the previously mentioned FIFO box hack for the 3277 (the
hack wasn't possible for the 3278 because so much of the electronics had
been moved back into the controller) ... was that between the "enter"
key being pushed (say on a full screen of input) and the screen update
... there would normally be a period when the keyboard wasn't taking
input (the keyboard would lock up if a key was hit, and the person would
then have to stop and reset the keyboard) ... the FIFO box hack allowed
the person to just keep typing w/o having to worry about the keyboard
locking up (after the 3270 enter key was hit).

100wpm at 5chars/word ... is 600cpm (including the space per word, i.e.
6 chars/word) or 10cps ... about .10secs/char. so a fast typist would be
ready to type the next char before the 3278 was ready. however 1) the
3270 enter key is different ... somewhat akin to a mechanical
typewriter's carriage return ... and it would be sufficient that the
delay is predictable, and 2) the 3277 FIFO box would somewhat mask that
many 3270 operations were really half-duplex.

[netinfo/gosip-order-info.txt] [ 9/91]
This information was compiled and made available by the National
Institute of Standards and Technology (NIST).
August, 1991

... some snipping

GOSIP Version 1.
----------------
GOSIP Version 1 (Federal Information Processing Standard 146) was published in
August 1988. It became mandatory in applicable federal procurements in August
1990.
Addenda to Version 1 of GOSIP have been published in the Federal Register
and are included in Version 2 of GOSIP. Users should obtain
Version 2.
GOSIP Version 2.
----------------
Version 2 became a Federal Information Processing Standard (FIPS) on
April 3, 1991 and will be mandatory in federal procurements initiated
eighteen months after that date, for the new functionality contained
in Version 2. The Version 1 mandate continues to be in effect.
Version 2 of GOSIP supersedes Version 1 of GOSIP. Version 2
of GOSIP makes clear what protocols apply to the GOSIP Version 1 mandate
and what protocols are new for Version 2.

... snip ...

The following "List of domains generated by Internet Domain Survey
progam, October 1990" had over 9340 domains

6.1.1.2. EARN
We would like to acknowledge and thank Nadine Grange of
the EARN Office in France for the following
information.

EARN, the European Academic Research Network, is the
first general purpose computer network dedicated to
universities and research institutions throughout
Europe, the Middle East and Africa.

The network is widely used for scientific, educational,
academic and research purposes. Commercial and
political use is not allowed, either directly or
indirectly.

EARN is made up of nearly 500 institutions including
universities, European research centers (e.g., CERN,
the European Space Agency, and the European Molecular
Biology Laboratory), and national research centers and
laboratories such as CNRS (France); Rutherford Appleton
Laboratory (UK); CNR, INFN, and CINECA (Italy); DESY,
GSI, DFVLR and the Max Planck Institute (Germany).

EARN also has links to 27 countries including
Yugoslavia, Turkey, Algeria, Morocco, Tunisia, and
Egypt, Iceland, and Luxembourg, to name a few.

EARN is an integral part of BITNET (see Section 1.5.4),
in that it is based on the same protocols and shares
the same name space. Through BITNET, EARN members have
access to equivalent facilities in Argentina, Brazil,
Canada, Chile, Japan, South Korea, Mexico, Singapore,
Taiwan and the United States.

Most of the academic networks in the world can be
accessed through EARN including EUnet, HEPnet, NSFNET,
national European networks such as DFN in Germany and
JANET in the UK, as well as a regional European Network
such as NORDUnet, which links all the Nordic countries
(see Section 6.28).

One of EARN's major objectives is to stimulate
cooperative research, support the day-to-day exchange
of research information, and the execution of joint
projects and publications. Like BITNET, EARN supports
mail, mailing lists, and a type of file transfer. It
provides the LISTSERV mailing list function. Its
facilities also allow users access to remote
applications, databases, and libraries.

EARN is also an international member of RARE (Reseaux
Associes pour la Recherche Europeenne) and cooperates
actively with RARE and COSINE (Cooperation for Open
Systems Interconnection Networking in Europe) on OSI
for the research community. RARE and COSINE are more
fully described in Sections 10.1.5 and 10.1.7.

For information about access to EARN, how to become a
member organization or member country, or any other
general information, contact your country's EARN
representative or:

Happy DEC-10 Day

"Charlie Gibbs" <cgibbs@kltpzyxm.invalid> writes:
Although I worked in Univac's OS/3 group, I heard a story from
over the fence in the 1100 group that their approach to providing
consistent response time was to delay responses that were ready
too soon. I suppose that's easier than making slow responses
faster, but it still didn't sound like Doing the Right Thing.

however, the batch system didn't get anything like dynamic adaptive
resource management until much more recently.

at one point in the early 80s ... there was a corporate strategic
statement that CMS would be the official interactive platform ... and
the TSO group contacted me about possibly rewriting the MVS resource
manager (however, TSO has had a lot more performance issues than just
the MVS resource manager). related old email
http://www.garlic.com/~lynn/2006b.html#email800310

things had slightly recovered from the mid-70s, when in the wake of the
demise of the future system effort ... the head of POK had convinced the
corporation to kill vm370, shut down the burlington mall vm370 group and
transfer all the people to POK ... part of the mad rush to get stuff
back into the 370 product pipeline ... supposedly the vm370/cms people
were needed to make the MVS/XA ship schedule, which still wasn't until
the later part of the early 80s.

Endicott did manage to resurrect the vm370 product mission ... but had
to reconstitute the development group from scratch. There is a joke that
the head of POK was a major contributor to VMS ... since so many of the
burlington mall group weren't going to leave the area and went to work
for DEC (also PRIME and some number of other companies in the greater
boston area).

old post with list of corporate locations around the world that added
one or more new network nodes during 1983 (118 locations, from
Amsterdam to Zurich and Vancouver to Johannesburg)
http://www.garlic.com/~lynn/2006k.html#8 Arpa address

one of the issues with corporate links was that they were required to be
encrypted (if they left corporate premises). in the '85 time-frame there
was a comment that the internal network had over half of all the link
encryptors in the world ... past posts mentioning the internal network
http://www.garlic.com/~lynn/subnetwork.html#internalnet

in the 70s & 80s there was enormous difficulty getting gov. approvals
for deploying encrypted links ... especially when they crossed national
boundaries.

software encryption also wasn't practical with T1 and higher speed
links. old email mentioning a 3081 processor doing software DES at
150kbytes/cpu-second (a dedicated two-processor 3081K would have been
needed to handle encryption on each end of a full-duplex T1 link)
http://www.garlic.com/~lynn/2006n.html#email841115

Daylight Savings Time again

greymaus <greymausg@mail.com> writes:
Kennedy had continued Eisenhower's policy of 'advising', and had sent
in large numbers of 'advisors'. (These men, if asked, should have warned
of what the real situation in Vietnam was. Probably, nobody did.)

there is at least one book that claims special forces had things
relatively stable ... but westmoreland came in and wanted battles to
give regular army officers field command experience (along with
promotions)

Happy DEC-10 Day

jmfbahciv <jmfbahciv@aol> writes:
There were customers which did forms computing. BCTel was a TOPS-10
site which ran MCS. Responding to every character was eliminated
unless the user was in monitor mode.

JMF and CDO wrote a corporate architectural specification w.r.t.
this kind of computing. IIRC, the corporate goal at that time
was to be able to do 1000 transactions/second; the cybercrud
assigned to this project was TPS.

Happy DEC-10 Day

Anne & Lynn Wheeler <lynn@garlic.com> writes:
old email from person setting up EARN ... he had previously done a stint
at the cambridge science center ... the following year we exchanged
teenage offspring for the summer:
http://www.garlic.com/~lynn/2001h.html#email840320

from above ..
Following a series of heavy ARVN defeats in May and June 1965,
Westmoreland believed the Viet Cong were moving into the third and final
phase of the insurgency - the fielding of large conventional style
units. Consequently, rather than pursuing a counterinsurgency approach based
on population security he designed a strategy of attrition,

... snip ...

i.e. special forces trained to work with local populations.

I don't have the book where I originally saw the reference ... but it
was a history of special forces (why was the original organization named
"10th group"?) ... I've seen copies in major bookstores in the war
history section.

a further search turned up quotes from various "google books" (about
westmoreland changing from special forces counterinsurgency to
traditional army operations):
The Dynamics Of Defeat: The Vietnam War In Hau Nghia Province
The Army and Vietnam

this was somewhat repeated in the stories about Boyd's battle plan for
desert storm, in contrast to the traditional army conventional warfare
approach ... I had sponsored Boyd's briefings at IBM ... some past
posts mentioning Boyd &/or OODA-loops
http://www.garlic.com/~lynn/subboyd.html

[3] There is actually a mistake on the supposedly "green card" text in that the
machine "Immediate" "space 0 lines" should be X'03', that is, a "no-operation" -
because - think about it - nothing happens!

my gcard.html ... was q&d conversion of (CMS) IOS3270 file that was
widely available internally, I wasn't the original author, but had added
the *sense* information in the file ... most of which came from the
360/67 (blue) "reference card"; see bottom of web page for original
attribution.

the gimmick that VNET/RSCS used for networking control ("TAG")
information was a no-op ('03') ccw pointing to the control information
(with the length of the information) ... aka VNET/RSCS used the cp (unit
record) spool system for all its files ... so everything had to look
like a printer or punch stream (generated with appropriate channel
commands; cp treated the no-op as a no-op ... but would still copy the
indicated data into its spool file).
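a small schematic (in Go) of the TAG gimmick ... the CCW struct is a
simplification (the real S/360 CCW is a packed 8-byte format: 8-bit
command code, 24-bit data address, flags, count) and the TAG text is
made up:

package main

import "fmt"

const ccwNoOp = 0x03 // S/360 "no operation" channel command code

// CCW is a simplified channel command word: the command code plus the
// data that the address/count fields would describe.
type CCW struct {
	Code byte
	Data []byte
}

func main() {
	// hypothetical networking control ("TAG") information
	tag := []byte("origin-node dest-node priority ...")
	ccw := CCW{Code: ccwNoOp, Data: tag}
	// the device treats X'03' as a no-op ... but the spool system still
	// copies ccw.Data into the spool file, carrying the TAG along
	fmt.Printf("ccw op=%02X len=%d\n", ccw.Code, len(ccw.Data))
}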

A major internal email client was VMSG. The PROFS group at one point
acquired the source for an early copy of VMSG for the PROFS email client
code. Later the VMSG author offered to upgrade PROFS with the latest
VMSG version, which had a lot more function ... the PROFS group denied
that it was using VMSG and then attempted to have the VMSG author fired.
That was dropped when the VMSG author pointed out that every PROFS
message carried his initials in the comment portion of the RSCS network
control ("TAG") information.

RSCS/VNET was the dominant internal networking infrastructure ... in
part because of NJE's (HASP/JES) heritage of using the HASP pseudo
device table to define networking nodes. The internal network was larger
than the arpanet/internet from just about the beginning until sometime
in mid-85 or early 86. misc. past posts mentioning the internal network
http://www.garlic.com/~lynn/subnetwork.html#internalnet

The early HASP networking source code changes had "TUCC" out in
cols. 68-71 ... and used the left-over entries in the one-byte-index
(255 entry) pseudo device table; installations frequently had 60-80
pseudo (unit record) devices (printer, punch, reader) ... which left
fewer than 200 entries for defining networking nodes. By the time
JES2/NJE shipped, the internal network was already over 200 nodes.

Corporate wasn't even going to allow RSCS/VNET to be announced (this was
in the period when POK had convinced corporate to kill off vm370, shut
down the burlington mall development group and move all the people to
POK ... the justification being that POK needed all the people in order
to meet the MVS/XA ship schedule; endicott eventually managed to save
the vm370 product mission, but had to reconstitute a group from scratch;
the head of POK is also considered a major contributor to VAX/VMS
because so many people left for DEC rather than move to POK).

The JES2/NJE group did manage to talk the corporation into a joint
JES2/RSCS product announcement. The issue was that even at the minimum
monthly rate that could be charged for a corporate product ... the
forecast would still cover the VNET/RSCS development costs. However,
there was no customer forecast at any monthly price that would cover the
NJE development costs (the number of customers times the monthly rate
was always less than the NJE costs; as the projected monthly rate went
up, the number of forecasted customers declined). The way out for JES2
was to take the combined RSCS+NJE costs divided by the combined RSCS+NJE
customer forecast ... which finally resulted in a number the business
people could agree with.

VNET/RSCS had a fairly clean layered implementation ... which among
other things allowed native drivers to co-exist with NJE drivers. In
fact, VNET/RSCS quickly became the mechanism for keeping different JES2s
from crashing MVS. JES2/NJE had jumbled together networking information
and JES2 control information ... and network traffic between JES2s at
different releases could result in JES2 failure, also taking down the
MVS system.

As a result, there was a growing library of VNET/RSCS NJE drivers ...
allowing use of the specific NJE driver matching the release of JES2 on
the other end of the link. The VNET/RSCS NJE drivers also had a growing
body of code that would rewrite NJE headers (originating from another
JES2 system) to be compatible with the directly connected JES2 system.
There is the infamous case of a modified San Jose JES2 system causing
MVS systems in Hursley to crash ... and it being blamed on VNET/RSCS
(because the Hursley VNET/RSCS NJE drivers hadn't been updated with the
latest countermeasures for keeping incompatible releases of JES2 from
crashing MVS). misc. past posts mentioning HASP, JES2, and/or hasp/jes2
networking
http://www.garlic.com/~lynn/submain.html#hasp

Native VNET/RSCS drivers continued to be used on the internal network
long after the decision was made to only ship (non-native) NJE drivers
... in part, because the native VNET/RSCS drivers were significantly
more efficient and got higher sustained throughput.

By the time that JES2 got around to supporting 999 nodes, the internal
network had exceeded 1000 nodes ... and by the time JES2 was upgraded to
support 1999 nodes, the internal network had exceeded 2000 nodes. JES2
also had a nasty habit of not only discarding network traffic when the
destination node wasn't defined in its table ... but also discarding
traffic when the origin node wasn't in its table (so you never wanted
JES2 in any critical location in the internal network).

Anne & Lynn Wheeler <lynn@garlic.com> writes:
my gcard.html ... was q&d conversion of (CMS) IOS3270 file that was
widely available internally, I wasn't the original author, but had added
the *sense* information in the file ... most of which came from the
360/67 (blue) "reference card"; see bottom of web page for original
attribution.

360/67 blue "reference card" was filled in with lots of sense info for
various devices ... except for A220 (HYPERChannel link adapter) which I
added ... I was using lots of HYPERChannel boxes in various HSDT
activities
http://www.garlic.com/~lynn/subnetwork.html#hsdt

This particular 360/67 "reference card" is stamped with "M"s name (I
must have borrowed and never returned). GML had been invented in 1969 by
"G", "M", and "L" (first letter of their last names) at the science
center ... which later was standardized as SGML ... misc. past posts
http://www.garlic.com/~lynn/submain.html#sgml

it had been studied quite a bit by then ... however, this particular
study was attempting to counteract a bunch of (early 80s) claims from
the (MVS) TSO and 3274/3278 camps ... that subsecond response didn't
provide any benefit and wasn't needed.

Lots of 3278 use was online (MVS) forms operations (IMS, CICS, etc)
... involving things like transcribing pieces of an insurance form and
hitting enter ... which then resulted in a dbms update. The enter+dbms
transaction delay was going to be more than a small fraction of a
second. That was different from bulk data entry (potentially just being
accumulated in a non-DBMS file).

However, TSO & 3274/3278 camps was attempting to then extend that
scenario to interactive computing as justification for not needing
subsecond response for anything.

Lots of the forms entry was also DBMS query operations ... the current
analog is lots of web-based browser forms.

my (partial) response for masking current web delays ... is to queue up
a large number of web pages in different browser tabs ... and then
browse the different tabs at local PC response speed ... rather than
having to synchronously wait for each individual web page.

search engine history, was Happy DEC-10 Day

Maarten van Tilburg <mtilburg@planet.nl> writes:
But SAP could not run on VMS (that was the word, I never knew if it
was actually true, SAP refused to support it), so there we were: the
finanial system was implemented on Windows. With at least one single
point of failure (the database server), a backup system of the
database which was unsecure (Oracle had no on-line backup like RDB) a
needing three times as much people to support the system than the
previous one.

we ran into some large corporations that got into the SAP camp ... and
found themselves spending $50m/annum on SAP consultants.

Happy DEC-10 Day

Charles Richmond <frizzle@tx.rr.com> writes:
Not to forget the APL character set. At the college I attended, the
printing APL terminals were all DECWriters. The characters produced
did *not* work well for "spirit duplicator" style copies. And the Math
and CS departments did *not* have a 2741 typeball. All our APL tests
were hand written. :-(

In the 80s ... I wanted a tool case ... and put in an order for the
standard FE tool case. I got a lot of push back because I wasn't part of
field service ... but eventually was able to push the order through (it
looks like an expensive, large, leather-covered briefcase). Lots of the
tools in it appear to be related to doing maintenance on selectric
typewriters.

lindy.mayfield@SSF.SAS.COM (Lindy Mayfield) writes:
i can do that on vm? create my own instruction?

originally, the virtual machine system was cp40, done on a 360/40 with
custom hardware modifications to support virtual memory.

when standard virtual memory became available with the 360/67 ... cp40
morphed into cp67.

370 was originally announced pretty much the same as 360 ... with a few
new instructions, but w/o virtual memory.

there was a special project, joint between the science center and
endicott, to modify cp67 to implement 370 virtual machines (supporting
the full set of unannounced 370 virtual memory features ... various bits
and fields differed from the 360 virtual memory architecture).

there was also a set of modifications to cp67 so that it would run on
370 virtual memory hardware (instead of 360/67 virtual memory). that was
up and running in a 370 virtual machine (on cp67 running on a real
360/67) a year before the first engineering 370 with virtual memory
hardware was operational (a 370/145 in endicott).

there was a security issue at the science center since there were some
number of non-employee users of the cp67 system from various educational
institutions in the boston area. so to help avoid unannounced 370
virtual memory info from leaking, the standard operation was: "cp67l"
ran on the real 360/67; "cp67h" (with the 370 virtual machine support)
ran in a 360/67 virtual machine under cp67l; and "cp67i" (modified to
run on the 370 virtual memory architecture) ran in a 370 virtual machine
under cp67h.

non-employees using the "cp67l" system wouldn't have visibility into
what the "cp67h" system was doing in a separate virtual machine (or that
there were 370 virtual machines or "cp67i" systems).

when engineering 370s with virtual memory support became available, they
were normally run with the "cp67i" system ... long before vm370 became
available. Internally, there was also the "cp67sj" system ... a "cp67i"
system with modifications done by San Jose adding device support for
3330 disks and the 2305 fixed-head paging device.

After the 23jun69 "unbundling" announcement ... starting to charge for
application software, SE services, and other stuff ... there was an
issue with training new SEs. misc. past posts mentioning unbundling
http://www.garlic.com/~lynn/submain.html#unbundle

New SEs had previously gotten a lot of experience ... essentially in an
apprentice position as part of large SE teams at customer sites. With
the start of charging for SE services ... nobody could figure out how to
do the "apprentice" thing. As a result, several internal CP67 virtual
machine datacenters were set up as part of "HONE" ... supposedly to give
SEs in branch offices remote/online "hands-on" experience running
various operating systems in cp67 virtual machines.

After the initial 370 announcement, a subset of the "cp67h" changes were
applied to the HONE systems ... to allow (non-virtual memory) 370
virtual machines (supporting the new instructions in the original 370
announcement). This would allow SEs to build & test operating systems
for "370" operation. misc. past posts mentioning HONE:
http://www.garlic.com/~lynn/subtopic.html#hone

the science center had also ported apl\360 to cms for cms\apl. some
number of sales & marketing support applications started to be
implemented (in apl) and also deployed on HONE. eventually the sales &
marketing (apl) applications became so extensive that they completely
crowded out the SE virtual operating system activity. At some point,
branch office sales had to process customer orders thru various HONE
applications before they could be submitted (and HONE datacenters
started to pop up around the world). One of my hobbies was supporting
HONE operation ... and as a new employee fresh out of college ... I got
some number of overseas trips as part of the HONE proliferation.

I mentioned something similar after the greencard spam in the 90s.
Issues raised about why ISPs wouldn't block it: 1) they made money from
the spammers, 2) they didn't want potential legal actions if they
started blocking and something slipped through, 3) the routers and
servers used by most ISPs didn't have the processing capability
http://en.wikipedia.org/wiki/Newsgroup_spam

A counter to #3 was that ISPs were starting to block multiple concurrent
connections for the same account (like multiple dialup connections) ...
and if they could figure that out ... they could reasonably figure out a
spamming profile to block.

Not long after, I was on a business trip to Scottsdale and was having
dinner at a mexican restaurant in oldtown. A couple came in and were
seated behind us; they were then joined by a man who sat behind me. The
man spent an hour explaining how he did his spamming and how he could do
it for their commercial website ... along with some advice about
configuring their server to ignore any irate responses to the spam.

Walter Bushell <proto@panix.com> writes:
And that will end Social Security. Once it's seen as a welfare program,
its death is certain whether by immediate cut or by neglect. Or just by
understating the rate of inflation when computing benefits, as now.

nearly all the (pay-as-you-go) retirement benefit plans were based on
the baby boomer generation being so much larger than the previous
generation ... with enormous wage earnings during their prime working
years ... along with improved educational levels and higher-earning jobs
(for that matter, not just retirement benefits but nearly all
gov.-funded operations dependent on tax revenues).

that is being inverted as the baby boomers retire ... and the following
generation is only half the size and less well educated.

sort of chicken and egg ... only half as many people and less well
educated ... contributed significantly to jobs moving out of the
country.

at the annual state governors convention in the early 90s ... they
looked at the falling education level and had a study indicating that if
they could bring back boomer STEM education levels ... it would add a
couple percent to GDP growth (along with more and higher paying jobs).
STEM just kept falling rather than improving (along with the jobs).

Eric Chomko <pne.chomko@comcast.net> writes:
But using the concept to write an OS, always written in assembly code
in the old days, in a high-level language did come out before RISC.
So, why not make a limited instruction set to force the users to go
that route already in place. Bootstrap languages, high-level
languages used to write OSs, and THEN RISC... Here we are today.

RISC ... a reaction to the (failed) future system effort ... go to the
opposite extreme in hardware complexity ... and use the pl.8 language
and cp.r system to compensate for the hardware shortcomings.

most of the (801/risc) Iliad chip projects got canceled for one reason
or another ... resulting in some number of the engineers leaving to work
for other vendors on the next generation of risc efforts.

the (801/risc) ROMP chip project (joint between research and office
products) was pure pl.8 and cp.r ... intended for a displaywriter
follow-on. for various reasons that got canceled (one was that the
minimum ROMP displaywriter entry point ... price & performance ... was
above the top end of the existing displaywriter market). they were then
looking around and decided on using it for the growing unix workstation
market.

prices of chips/hardware had declined to the point where it became much
less expensive to produce computer hardware ... however, proprietary
operating systems were still a barrier. unix was emerging as a
relatively inexpensive alternative. the ROMP group then hired the
company that had done PC/IX ... to do aix v2. There was an issue with
what to do with all the pl.8 programmers ... so they defined something
called the VRM (implemented in pl.8) that provided an abstract virtual
machine ... and a claim that it would take the PC/IX company much less
time to port to the abstract virtual machine interface (than to the bare
hardware).

This was somewhat disproved when the palo alto group did a port of BSD
to the bare hardware. It was also a pain for things like new device
drivers ... having to do both a pl.8 VRM device driver and a C unix
device driver.

There were also some number of tweaks that had to be done to ROMP for
unix ... since the original 801/ROMP didn't have things like
protection/privilege domains. the claim was that pl.8 would only produce
correct code ... and cp.r would only load correct pl.8 code for
execution. moving to unix at least required hardware-supported
separation between the kernel and application programs.

the 801/risc virtual memory segment registers & inverted tables
simplified (hardware) virtual memory operation. the issue was that (at
least for most of the 32bit-address lifetime) 16 256mbyte segment
registers made for a more difficult "sharing" model. the pl.8/cp.r
response was that, with no protection domains, pl.8 application code
would be able to switch segment register values as trivially as changing
addresses in general purpose registers.

another scenario was that there was no cache consistency ... so no
multiprocessor ... and also no consistency between the i-cache and
d-cache ... so things like loaders that would bring in programs and
possibly operate on them as data (as part of preparing for execution)
... needed special instructions to force data from d-cache lines back to
memory and potentially invalidate the corresponding addresses in the
i-cache.

John liked to go out drinking after work ... and for a good part of the
80s I lived in San Jose but worked for Yorktown ... so I commuted from
San Fran to Kennedy a couple times a month (work Monday in san jose,
take the redeye to kennedy and be at the office by 7am Tuesday).
sometimes I would get shanghaied into going drinking with John Tuesday
night (after only 4hrs sleep the night before) and he would want to stay
out until the wee hours of weds. morning.

part of the studies from the early 90s referred to a possible tipping
point ... because of declining domestic educational levels ... more and
more foreign workers were being imported to fill high-skill jobs ... but
a critical mass of foreign workers could be reached ... where it would
switch from importing the workers here ... to the jobs going to where
the workers are (sort of an analog of the concentration of skilled jobs
in silicon valley in the last century ... but no longer in this
country).

besides the retiring baby boomers increasing the ranks of retirees by a
factor of four (and therefore increasing benefits payout by a factor of
four) ... and their replacements being only half as many with a lower
level of education (so the ratio of productive workers being taxed to
pay for the benefits, to the number of retirees receiving benefits,
drops by a factor of eight ... i.e. the generation following the baby
boomers has to work to support eight times as many retirees) ... the
generation following the baby boomers is going to account for
drastically decreased consumer spending (half as many people, as well as
lower-skill, lower-paid jobs) ... decreased consumer spending will
result in further loss of jobs ... which results in further decreases in
consumer spending ... a self-reinforcing feedback loop.

The transition to economy with much smaller ratio of workers to retirees
as well as lower paid and lower skilled jobs ... is likely to be traumatic
before things reach some stable economic level ... but likely with
nearly everybody having much lower standard of living.

Anne & Lynn Wheeler <lynn@garlic.com> writes:
The transition to economy with much smaller ratio of workers to retirees
as well as lower paid and lower skilled jobs ... is likely to be traumatic
before things reach some stable economic level ... but likely with
nearly everybody having much lower standard of living.

walker commented that the fiscal responsibility bill expired in 2002 ...
and it was after that that he started speaking out (comments like: no
congressman in the last 50 yrs has been capable of middle school
arithmetic). he made some reference to gov. debt now being $500,000 per
person.

mike@MENTOR-SERVICES.COM (Mike Myers) writes:
On thinking it over some, it had to be early 1968, ending around March
or so (I just reviewed my CV and found I went to FE education in April
of 1968). So maybe it was closer to Release 11 or 12. Wasn't release
14 actually 14/15 (not that that means anything, just a recollection)?

part of a presentation at the '68 Atlantic City Share meeting covered a
bunch of performance enhancements i had done at the univ to both mft14 &
cp67. cp67 was installed at the univ. where i was an undergraduate ...
which also ran os/360.
http://www.garlic.com/~lynn/94.html#18 CP/67 & OS MFT14

i had been doing highly customized os/360 sysgens attempting to
radically improve thruput ... as well as being able to run (at least)
stage2 sysgen in the production jobstream. I did lots of re-ordering of
stage2 sysgen to carefully place files and pds members on disk to
optimize disk arm motion (i got about three times the thruput for the
typical student fortran job stream ... this was before watfor, when
student jobs were still going thru the standard fortgclg).

for cp67 ... i rewrote lots of kernel to significantly reduce
pathlength.

the os/360 releases i did for mft were 9.5, 11, and then 14. when the
combined release 15/16 came out ... i did an mvt generation (mvt was
starting to get to the point where it was more reliable). the big thing
i remember about 15/16 was that it introduced a format enhancement
allowing the vtoc cylinder to be specified (rather than defaulting to
cylinder zero). I placed the vtoc in the middle of system packs ... and
then attempted to force placements radiating out from the middle of the
pack on both sides of the vtoc.
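a quick back-of-envelope check (in Go) of why the middle of the pack is
the right place for the vtoc ... assuming uniformly distributed accesses
alternating with vtoc references on a 200-cylinder pack (the cylinder
count is just illustrative):

package main

import "fmt"

// avgSeek returns the average arm travel (in cylinders) between the
// vtoc and a uniformly distributed data cylinder.
func avgSeek(vtoc, cylinders int) float64 {
	total := 0
	for c := 0; c < cylinders; c++ {
		d := c - vtoc
		if d < 0 {
			d = -d
		}
		total += d
	}
	return float64(total) / float64(cylinders)
}

func main() {
	fmt.Printf("vtoc at cyl 0:  avg seek %.1f cylinders\n", avgSeek(0, 200))   // 99.5
	fmt.Printf("vtoc at middle: avg seek %.1f cylinders\n", avgSeek(100, 200)) // 50.0
}

i.e. moving the vtoc from cylinder zero to the middle roughly halves the
average arm travel for vtoc references.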

one of the problems was that typical PTF activity "replaced" PDS members
... and 5-6 months of system PTF activity could significantly degrade my
carefully optimized thruput ... i.e. pds members being replaced in
system datasets ... creating lots of gas ... standard pds compression
didn't offer any way of controlling member ordering. If I wasn't
planning on doing a near-term sysgen for a new system ... i would have
to rebuild the system to get the disk arm optimization back.

at the end of '68, boeing was putting together the basis of boeing
computer services (BCS) ... moving datacenters from a cost-center to a
P&L basis ... at least on paper. As part of the concept of "selling"
services ... they wanted to add CP67 online timesharing ... and be able
to sell CP67 internally within Boeing ... but also to external
organizations (somewhat akin to some of the other commercial online
cp67-based timesharing service bureaus that had been formed). Spring
break, '69, they conned me into teaching a one-week class for the
burgeoning BCS technical staff. Boeing then brought me in for the summer
... they did some sort of paperwork that listed me as a mid-level
fulltime employee (which got me special parking lot privileges at boeing
field). They brought in a new 360/67 "simplex" ... installed in the
hdqtrs machine room next to the 360/30 that was doing payroll (a deal
was also cut that, for the summer work, i got some "special project"
academic credit towards graduation).

The boeing huntsville two-processor 360/67 smp was also moved up to
seattle that summer. It had been running as two single processors with a
modified version of mvt13. Boeing huntsville was supporting a lot of
long-running 2250 graphics applications. os/360 had significant problems
with storage fragmentation for long-running jobs. mvt13 had been
modified to run with 360/67 virtual memory tables ... it didn't do any
paging ... but used the virtual memory mapping to re-org storage and
make it appear contiguous (a countermeasure to os/360 storage
fragmentation).

Michael Wojcik <mwojcik@newsguy.com> writes:
Yep. The HTTP request/response model, plus the relatively high latency
of the verbose HTTP protocol, plus the links-and-forms orientation of
HTML, make traditional web apps very much block-mode.

The point of AJAX, of course (and crossing threads again), is to
emulate a lower-latency interaction model on top of HTTP+HTML, by
pushing the slow network I/O into the background and using client-side
processing to provide a fancier front end. It's just the block-mode
versus character-mode dichotomy again.

later leave their posts and show up at a small client/server startup
responsible for something called the "commerce server". by then we had
also left ... in part because the cluster scaleup work had been
transferred, announced as supercomputer (numerical intensive only ... no
dbms stuff), and we had been told that we couldn't work on anything with
more than four processors; ... misc. old email related to cluster
scaleup
http://www.garlic.com/~lynn/lhwemail.html#medusa

so they wanted to do payment transactions on their server ... and the
startup had invented this technology called "SSL" that they wanted to
use. We got this thing called a "payment gateway" deployed ... using an
ha/cmp configuration (from the non-scaleup part of the cluster work) ...
some past posts
http://www.garlic.com/~lynn/subnetwork.html#payments

there were various and sundry other things around the server edges for
security and internet attack resistance.

During this period ... there was rapid growth in both HTTP and HTTPS
... and both were seeing severe performance problems. HTTP/HTTPS used
TCP ... which had been designed/implemented for long-running sessions
... not for quick transactions. TCP had a minimum 7-packet exchange per
operation, with a relatively long tail in FINWAIT. With a high rate of
HTTP activity, the TCP FINWAIT list exploded ... most implementations
started finding that webservers were spending 95% of the processor
running the FINWAIT list. The small client/server startup had webservers
for downloading their products ... and was adding servers almost as fast
as they could be installed. Finally they installed a SEQUENT machine
running Dynix ... and the problems cleared up ... SEQUENT had already
fixed the long-FINWAIT-list issue in DYNIX to handle installations with
20,000 (real, long-running) telnet sessions. It took the other vendors
another six months or so before there were new releases addressing the
FINWAIT problem.

but there is also this DNSSEC stuff ... which the SSL Certification
Authority industry has somewhat been backing ... because it helps with
the integrity of their certification processes for SSL digital
certificates ... but it also represents a catch-22 for that industry
http://www.garlic.com/~lynn/subpubkey.html#catch22

one of the things that is part of DNSSEC is the ability to register
public keys with the domain name authority ... and use the DNS
infrastructure to do real-time retrieval of public keys ... w/o the need
for digital certificates.

... however, one of the features of XTP is a minimum 3-packet exchange
for reliable transmission (compared to the 7-packet minimum for tcp). If
DNSSEC public keys were registered and clients could request that public
keys be piggy-backed on the response to a domain name lookup (aka the
request to translate a domain name to an ip-address) ... combined with
XTP ... it would be possible to do an HTTPS-light in a three-packet
exchange w/o the need for any of the digital certificate processing
gorp.

the client gets the server's public key back in the same DNS response
that returns the server's ip-address. It then generates a random
symmetric key ... encrypts the transaction with the symmetric key ...
and encrypts the symmetric key with the server's public key. It then
sends off the (XTP) transaction: the encrypted symmetric key followed by
the encrypted transaction. The server receives the transaction, decrypts
the symmetric key with the server's private key ... and then decrypts
the transaction with the random symmetric key. The server then generates
the response ... encrypts it with the client's random symmetric key ...
and sends back the encrypted response. The client then decrypts the
response with the symmetric key that it had previously generated.

purely a single round-trip ... with the same encryption strength as
standard HTTPS ... but w/o all the extraneous round-trips and
certificate processing overhead.
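a sketch (in Go) of that single-round-trip exchange ... RSA-OAEP and
AES-GCM are modern stand-ins chosen just to make the sketch runnable
(the description above predates these exact algorithms), and error
handling is elided for brevity:

package main

import (
	"crypto/aes"
	"crypto/cipher"
	"crypto/rand"
	"crypto/rsa"
	"crypto/sha256"
	"fmt"
)

func main() {
	// server key pair; the public half is what DNS would have returned
	serverKey, _ := rsa.GenerateKey(rand.Reader, 2048)

	// client: random symmetric key, encrypt transaction, wrap the key
	sym := make([]byte, 32)
	rand.Read(sym)
	block, _ := aes.NewCipher(sym)
	gcm, _ := cipher.NewGCM(block)
	nonce := make([]byte, gcm.NonceSize())
	rand.Read(nonce)
	ct := gcm.Seal(nil, nonce, []byte("example payment transaction"), nil)
	wrapped, _ := rsa.EncryptOAEP(sha256.New(), rand.Reader, &serverKey.PublicKey, sym, nil)

	// server: unwrap the symmetric key, decrypt the transaction
	sym2, _ := rsa.DecryptOAEP(sha256.New(), rand.Reader, serverKey, wrapped, nil)
	block2, _ := aes.NewCipher(sym2)
	gcm2, _ := cipher.NewGCM(block2)
	pt, _ := gcm2.Open(nil, nonce, ct, nil)
	fmt.Printf("server decoded: %s\n", pt)
}

in a real exchange the nonce would ride along with the wrapped key and
ciphertext in the single (XTP) transmission.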

part of this came from responding to some payment protocol specification
work in the mid-90s that was looking at a fully end-to-end payment
protocol with appended (payment industry) digital certificates. However,
the standard digital certificate payload is about 100 times larger than
the base payment transaction payload ... and adds about 100 times the
processing. misc. past posts discussing the enormous processing and
payload bloat of some of these payment protocol specifications
http://www.garlic.com/~lynn/subpubkey.html#bloat

mike@MENTOR-SERVICES.COM (Mike Myers) writes:
I worked on a couple of projects in my two years there, CLEAR, which
was a library source management system used to maintain OS/360 source
code and the MEMMAP project mentioned earlier.

one of the problems that the JES2 group had was that they did all their
source maintenance on CMS using CMS source maintenance ... recent
JES2/HASP networking reference
http://www.garlic.com/~lynn/2010b.html#44 sysout using machine control instead of ANSI control

originally done for the multi-level updates for the "cp67l", "cp67h",
and "cp67i" systems. The implementation/design was eventually directly
supported by the editors and the update program (rather than as exec
front-end processes) and eventually used for both the cp67 and vm370
products ... and shipped as part of doing customer source-level
maintenance (i.e. fixes & updates were shipped as source updates).
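a toy illustration (in Go) of the multi-level update layering ... the
real CMS update scheme uses "./" control records with sequence numbers;
this just shows the idea of an ordered stack of update levels applied to
a base source file:

package main

import "fmt"

// update is a simplified "replace line n" record; the actual CMS
// control-file syntax also supports inserts and deletes.
type update struct {
	line int    // 1-based line number
	text string // replacement text
}

// apply layers each update level, in order, on top of the base source.
func apply(base []string, levels ...[]update) []string {
	out := append([]string(nil), base...)
	for _, level := range levels {
		for _, u := range level {
			out[u.line-1] = u.text
		}
	}
	return out
}

func main() {
	base := []string{"line one of base source", "line two of base source"}
	l := []update{{1, "line one as changed by the 'l' level"}}
	h := []update{{2, "line two as changed by the 'h' level"}}
	for _, s := range apply(base, l, h) {
		fmt.Println(s)
	}
}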

SYSENTER/SYSEXIT_vs._SYSCALL/SYSRET

Terje Mathisen <"terje.mathisen at tmsw.no"> writes:
Anything that updates a real memory location every us is a performance bug!

If you instead use a memory-mapped timer chip register, then you've
still got the cost of a real bus transaction instead of a couple of
core-local instructions.

this was one of the justifications for the 370 timer facilities. 360s
had the location-80 timer in low storage. lower-end 360 models updated
it in the millisecond range ... higher-end 360s updated the low-order
bit every 13+ microseconds.

for compatibility, 370s did provide support for the location-80 timer,
but at the millisecond range.

the univ. where i was an undergraduate had a 360/67 (which had the
"high-speed" location-80 timer). I had been doing a bunch of
enhancements to (virtual machine) cp67 ... one of which was adding
tty/ascii terminal support. as part of this, I attempted to do something
with the 2702 terminal controller that it couldn't quite do (but should
have been able to). somewhat as a result, the univ. started a clone
controller project ... using an interdata/3: reverse-engineer the 360
channel interface, build a channel interface board for the interdata/3,
and program the interdata/3 to emulate a 2702 controller with some
additional function (later, four of us got written up as being
responsible for the mainframe clone controller business).

some early controller tests resulted in bringing down the 360/67
(hardware "red-light"). the issue was that the memory bus was shared
between the processor, the location-80 timer, and the i/o channels (and
these were non-cache machines). the location-80 timer had some leeway if
the bus was in use when the timer tic'ed ... but if the timer tic'ed
again ... with the previous timer memory update still pending ... the
machine would stop/red-light.

we had to go back and redo the controller channel board to make sure
that it periodically told the channel to release the memory bus (in the
middle of transfers) so that any pending timer tic update could occur.
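a toy simulation (in Go) of the red-light condition ... the cycle counts
are made up; the point is just the rule that one pending timer update is
tolerated, two is a machine stop:

package main

import "fmt"

func main() {
	pending := 0
	const busHeldUntil = 30 // channel holding the memory bus (illustrative)
	for cycle := 0; cycle < 40; cycle++ {
		if cycle%13 == 0 { // timer tics roughly every 13 microseconds
			pending++
			if pending > 1 {
				fmt.Printf("cycle %d: tic with update still pending - red light\n", cycle)
				return
			}
		}
		if cycle >= busHeldUntil && pending > 0 {
			pending = 0 // bus released; the timer update finally lands
		}
	}
	fmt.Println("no hang: bus released in time")
}

with the redone channel board periodically releasing the bus, the hold
time effectively stays under the timer interval and the pending update
always lands before the next tic.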

SYSENTER/SYSEXIT_vs._SYSCALL/SYSRET

nmm1 writes:
Er, no. How do you stop two threads delivering the same timestamp
if they execute a 'call' at the same time without having a single
time server? Ensuring global uniqueness is the problem.

one of the requirements was to correctly order dbms transaction log
records after a failure (for recovery). a standard dbms speed-up is to
allow a transaction to be considered committed as soon as the
corresponding log record has been written to disk ... even though the
altered record in buffer memory may not yet have been pushed out to its
dbms location (lazy writes to the DBMS disk location).

recovery (after failure) requires using the log to sequentially "rerun"
the transactions ... eventually getting the dbms image on disk to a
consistent state.

a cluster dbms implementation would force the record to disk before
allowing it to migrate into a DBMS buffer on a different processor. to
speed things up, it would be possible to allow the modified record to be
transmitted (over a high-speed link) between dbms buffers (in different
processors in the cluster). the problem then is that there could be
multiple committed transaction changes ... recorded in different dbms
logs ... but not yet reflected in the DBMS record on disk.

as part of supporting direct buffer-to-buffer copies (w/o having to
force out to disk) ... a mechanism was needed (for recovery) to merge
the transaction logs from the different systems so that they preserve
the original global temporal ordering. The requirement isn't actually an
exact time value for each transaction ... it is that the multiple logs
can be merged so that entries occur in the original sequence. unique
accurate time works ... but so would nearly any unique monotonically
increasing number (say a transaction version number ... which could be
supported as part of the operation of the dbms cluster distributed lock
manager ... which also piggy-backs buffer-to-buffer record copies on
lock traffic).
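a minimal sketch (in Go) of the recovery-time merge ... the record
layout and the idea of the lock manager handing out a global sequence
number are illustrative assumptions:

package main

import (
	"fmt"
	"sort"
)

type logRecord struct {
	Node string
	Seq  uint64 // globally unique, monotonically increasing ordering number
	Op   string
}

// mergeLogs combines per-node logs and orders them by Seq; since each
// input is already locally ordered, a k-way merge would also work.
func mergeLogs(logs ...[]logRecord) []logRecord {
	var all []logRecord
	for _, l := range logs {
		all = append(all, l...)
	}
	sort.Slice(all, func(i, j int) bool { return all[i].Seq < all[j].Seq })
	return all
}

func main() {
	a := []logRecord{{"A", 1, "update rec#7"}, {"A", 4, "commit"}}
	b := []logRecord{{"B", 2, "update rec#7"}, {"B", 3, "commit"}}
	for _, r := range mergeLogs(a, b) {
		fmt.Printf("seq=%d node=%s %s\n", r.Seq, r.Node, r.Op)
	}
}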

Michael Wojcik <mwojcik@newsguy.com> writes:
Certainly, as Lynn has pointed out (more than once), there was some
backlash against the excessive complexity of Future Systems, at least
in the development of the IBM RISC processors. But in the early years
of outright RISC vs. CISC competition, RISC architectures were
developed on the theory that simple instruction sets with load/store
architectures could be implemented with fewer, shorter cycles and deep
pipelines, improving overall performance.

the instruction set simplification with 801/risc in the 70s didn't
bother me so much (modulo the lack of compare&swap) ... it was the lack
of cache consistency for implementing multiprocessor configurations and
the small number (16) of "segment" objects in the 32bit address space.

as in a previous note, I tried to work out a mechanism for packing
"small shared segments" into the 801 scheme.

as long as it was pl.8 and cp.r ... the lack of hardware protection was
presumably fine ... but that went out the window when attempting to
adapt it to being a unix workstation in the 80s (as did the lack of
support for large numbers of shared memory objects).

as mentioned in some of the old 801 email ... after various (801/risc)
Iliad chip efforts floundered ... some number of engineers left to work
on risc at other vendors (early 80s).
http://www.garlic.com/~lynn/lhwemail.html#801

the problem with the lack of compare&swap showed up with rs/6000 tho
... even tho there wasn't any multiprocessor support.

charlie had originally invented compare&swap (CAS are his initials)
when working on fine-grain locking for cp67 multiprocessor support. The
initial attempt to get it included in 370 was rebuffed with the
challenge that it needed a non-smp justification. thus were born the
application multithreaded examples.
http://www.garlic.com/~lynn/subtopic.html#smp
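the flavor of those multithreaded examples, sketched in Go (sync/atomic
standing in for the 370 instruction): a shared counter updated with a
classic compare&swap retry loop, no kernel locking involved:

package main

import (
	"fmt"
	"sync"
	"sync/atomic"
)

func main() {
	var counter int64
	var wg sync.WaitGroup
	for i := 0; i < 8; i++ {
		wg.Add(1)
		go func() {
			defer wg.Done()
			for n := 0; n < 1000; n++ {
				for { // compare&swap retry loop
					old := atomic.LoadInt64(&counter)
					if atomic.CompareAndSwapInt64(&counter, old, old+1) {
						break
					}
				}
			}
		}()
	}
	wg.Wait()
	fmt.Println(counter) // always 8000
}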

by the time rios (rs/6000) shipped ... many of the dbms implementations
had adapted to compare&swap use (available on multiple different
machines) ... a lot more efficient than having to implement much of dbms
thread serialization via kernel calls. porting various dbms to rios w/o
compare&swap (even w/o multiprocessor support) put rios at a thruput
disadvantage.

in a non-multiprocessor environment ... the primary semantic is being
atomic and non-interruptable (not actually having to worry about
serializing concurrent storage accesses). the rios (rs/6000) aix
solution was a special fastpath system call ... implemented in the
system call interrupt routine and returning immediately (the advantage
was that the system call interrupt switched to disabled-for-interrupts
... primarily i/o ... achieving the fundamental requirement for "atomic"
compare&swap).

moved over to head up somerset. in some sense, somerset could be
considered a melding of rios and motorola's 88k risc for power/pc (along
with fixing other 801/risc trade-offs that weren't really applicable to
a unix & C-language environment ... or, in apple's case, its unix-like
mach).

i think the IETF meeting in aug '88(?) with the presentation on
slow-start ... was also the acm sigcomm meeting ... with a paper on why
windowing algorithms won't reach stable state in a large bursty
internet.

a problem in a large bursty internet was avoiding large numbers of
back-to-back packets overloading buffers at intermediate nodes ... and
in a large bursty internet ... with a windowing-based algorithm ... ACKs
had a tendency to bunch up on the return path. A burst of ACKs arriving
all at the same time ... resulted in opening up the window and doing
multiple back-to-back packet transmissions ... resulting in intermediate
node congestion and overrun. the result was that things could get into
pathological oscillation, with slow-start building up the size of the
window ... and then having to drop back.

In that time frame ... i was doing rate-based pacing ... one of the
other presentations at that IETF meeting ... was on a gigabit
cross-country internet ... and the amount of data in the
bandwidth*latency product. I had nearly the identical bandwidth*latency
product on some slower-speed satellite links starting a few years
earlier ... and was doing rate-based pacing ...
http://www.garlic.com/~lynn/subnetwork.html#hsdt
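a minimal sketch (in Go) of rate-based pacing, as opposed to letting a
bunched-up window of ACKs trigger back-to-back transmissions ... the
rate and packet count are illustrative:

package main

import (
	"fmt"
	"time"
)

func main() {
	const rate = 5                 // packets per second (illustrative)
	interval := time.Second / rate // inter-packet gap at the target rate
	tick := time.NewTicker(interval)
	defer tick.Stop()
	for i := 1; i <= 5; i++ {
		<-tick.C // pace: one packet per tick, never a back-to-back burst
		fmt.Printf("send packet %d at %s\n", i, time.Now().Format("15:04:05.000"))
	}
}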

the token-ring camp was making pitches that ethernet effective thruput
was less than a mbit ... but I conjectured that was based on simulations
of the very early 3mbit ethernet before listen-before-transmit. In any
case, almaden research center had been wired with CAT5 anticipating a
predominant 16mbit t/r deployment ... but found that 10mbit ethernet
actually had both higher effective aggregate thruput and lower latency
(over the same wires).

Happy DEC-10 Day

Michael Wojcik <mwojcik@newsguy.com> writes:
I'm very dubious. The X.509 certificate exchange in SSL/TLS provides a
lot more than the server's public key (and there would still have to
be a session key exchange, etc). PKIs are not simply interchangeable
like that.

Even if we assume the DNS response is trustworthy (and I'm dubious
about DNSSEC's ability to achieve that), the pair {server name, public
key} doesn't tell the client anything about the provenance of that
key, or other identity or authorization claims being made by the
server. For all the many faults of X.509, at least it's a hierarchy.
Just because some DNS server tells me that foo.com has public key X
doesn't mean I have any reason at all to trust a session I open using
that key.

as part of doing this stuff with the small client/server startup that
wanted to do payment transactions using the technology they had invented
called "SSL" ... we had to do walk-thrus/audits of various of these new
organizations calling themselves Certification Authorities and issuing
these things called digital certificates
http://www.garlic.com/~lynn/subpubkey.html#sslcert

SSL domain name certification authorities have an issue with the
certification process ... they typically aren't the authoritative agency
for the information being certified (the information carried in the
digital certificates). The problem is that the Domain Name
Infrastructure is the authoritative agency for domain name ownership ...
and there are some number of vulnerabilities involving domain name
take-over ... followed by applying for a valid digital certificate ...
and it being granted.

Part of the fall-out from DNSSEC ... for the domain name certification
authority industry ... is the request that a public key be registered as
part of domain name registration ... with future communication digitally
signed (so the DNS infrastructure can verify the digital signature with
the on-file public key ... as a countermeasure to domain name take-over)
... note ... there are no digital certificates involved ... misc. past
posts on certificate-less public keys
http://www.garlic.com/~lynn/subpubkey.html#certless

also, currently, the certification authority industry has to require a
lot of identification information from domain name digital certificate
applicants. They then do an error-prone, expensive, and time-consuming
matching process between the supplied information and the information on
file with the domain name infrastructure (as to the true owner of the
domain). With on-file public keys ... they could convert to just
requiring domain name digital certificate applicants to digitally sign
the application. Then the certification authority can do a real-time
retrieval of the on-file public key from the domain name infrastructure
... and do a much more reliable, efficient, and less expensive signature
verification.
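a sketch (in Go) of that signature-based certification flow ... Ed25519
is just a modern stand-in for whatever algorithm would actually be
registered, and the application text is made up:

package main

import (
	"crypto/ed25519"
	"crypto/rand"
	"fmt"
)

func main() {
	// key pair registered at domain name registration time; the public
	// half is what the domain name infrastructure keeps on file
	pub, priv, _ := ed25519.GenerateKey(rand.Reader)

	application := []byte("ssl certificate application for example.com")
	sig := ed25519.Sign(priv, application)

	// CA side: real-time retrieval of the on-file key (simulated here),
	// then one signature verification replaces the identity matching
	fmt.Println("signature valid:", ed25519.Verify(pub, application, sig))
}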

there are several catch-22s for the certification authority industry.
first, domain name digital certificates were, in part, justified by
various perceived integrity issues with the domain name infrastructure.
Improving the integrity of the domain name infrastructure (so that the
certification authority industry can better trust the information used
in their certification) reduces the original justification for the
digital certificates. Also, if the certification authority industry
starts demonstrating that it can trust & rely on the on-file public keys
... then it is possible that others might also decide that they could
rely on the DNS infrastructure's on-file public keys (further
eliminating the justification for domain name digital certificates)
http://www.garlic.com/~lynn/subpubkey.html#catch22

digital certificates are an analog of the letters of credit/introduction
from the sailing ship days ... when the relying party had no other
mechanism for obtaining information when dealing for the first time with
a perfect stranger. the original scenario for digital certificates was
offline electronic email in the early 80s ... when there would be a
phone call to the electronic post-office, email exchanged, and then the
phone hung up. a person processing the email might then be faced with
first-time communication from a complete stranger and have no other
recourse to information about the entity they were dealing with.

the problem: as the internet became more & more pervasive and the normal
state of affairs was online and connected ... the original
justifications for digital certificates were less & less frequently true
... and they became redundant and superfluous.

A case in point was some of the digital certificate based payment
protocol specifications from the early/mid 90s. The consumer would
register their public key with their financial institution and be issued
a relying-party-only digital certificate (after the consumer's public
key was stored in their account record). The consumer was then expected
to digitally sign every payment transaction and append their digital
certificate for routing back to their financial institution. The
financial institution would retrieve the corresponding account record
for executing the transaction ... and would be able to verify the
digital signature with the on-file public key. Appending the digital
certificate represented a 100-fold payload bloat for the typical payment
transaction, and any processing of the digital certificate represented a
100-fold processing bloat. That was separate from being able to
trivially demonstrate that appending the digital certificate was
redundant and superfluous. misc. past posts about relying-party-only
digital certificates
http://www.garlic.com/~lynn/subpubkey.html#rpo

Somewhat as a result of having done this stuff now called "electronic
commerce", in the mid-90s we were asked to participate in the x9a10
financial standard working group, which had been given the requirement
to preserve the integrity of the financial infrastructure for all retail
payments (debit, credit, stored-value, point-of-sale, internet,
face-to-face, unattended, high-value, low-value, transit turnstile ...
aka ALL). The result was the x9.59 financial standard
http://www.garlic.com/~lynn/x959.html#x959

One of the things that x9.59 did was slightly tweak the paradigm and
eliminate the requirement for "hiding" the transaction details (and
account number) ... being able to use a digital signature ... w/o
requiring an appended digital certificate.

Now the major use of "SSL" in the world today is for encryption related
to hiding transaction details and account numbers ... however, with
x9.59 there is no longer any requirement to hide that information ...
and therefore also eliminates the major use of "SSL" in the world today.

and several of the participants were also involved in privacy issues
... and had done detailed consumer studies. they found the number one
issue was identity theft ... primarily account fraud identity theft
resulting in fraudulent financial transactions ... and a major source
of the information for those fraudulent financial transactions was
coming from data breaches. Since there seemed to be little or nothing
being done about data breaches ... they conjectured that publicity from
mandatory data breach notification would motivate corrective action.

again, the major breaches that make it into the news involve leaking
transaction details and financial account numbers. the x9.59 standard
did nothing about preventing such breaches ... instead it eliminated the
requirement to hide account information (as part of preventing
fraudulent financial transactions) ... and with it, the major common
threats & exploits that might occur as a result of the information
leaking out.

Now there were some digital certificate oriented financial standards
efforts going on in parallel with x9.59. one of them recognized the
enormous payload-bloat for financial transactions that comes with
appending digital certificates ... so they had a standards effort to
work on "compressed" digital certificates. However, using their
techniques for producing "compressed" digital certificates ... I
trivially showed that it was possible to compress a digital certificate
to zero bytes. Then instead of x9.59 being a certificate-less protocol,
it could be a digital certificate protocol, mandating that every x9.59
transaction had to include a zero-byte appended digital certificate.

Happy DEC-10 Day

Anne & Lynn Wheeler <lynn@garlic.com> writes:
from the sailing ship days ... when the relying party had no other
mechanism for information in first-time dealings with a perfect stranger.
the original scenario for digital certificates was offline electronic
email from the early 80s ... when there would be a phone call to the
electronic post-office, email exchanged, and then the phone hung up.
then a person processing the email might be faced with first-time
communication with a complete stranger and have no other recourse for
information about the entity they were dealing with.

the problem was that as the internet became more & more pervasive, and
the normal state of affairs was online and connected ... the original
justifications for digital certificates were less & less frequently true
... and they became redundant and superfluous.

and doing SSL between the webservers and the gateway (which exchanged
payment transactions between the internet and the payment networks).
first, we started off mandating "mutual" authentication (which didn't
exist at the time we started). Before we were done, it was also
necessary to register the payment gateway with the webserver
(invalidating digital certificate assumption about webserver doing first
time communication with strange payment gateway) and register webservers
with payment gateway (invalidating digital certificate assumption about
payment gateway doing first time communication with strange webserver).

by the time everything was done and operational ... it was trivially
obvious that digital certificates were redundant and superfluous ... but
were installed anyway ... as a side-effect of the "SSL" public key
library being used.

Michael Wojcik <mwojcik@newsguy.com> writes:
The Kingston group that took over your cluster stuff and repurposed it
for number-crunching - was that the same group that had done the
Scientific Visualization System / Data Explorer work? I worked on the
RS/6000 version of that software (Data Explorer/6000) in my last
couple of years in Cambridge.

it involved high-speed links ... both terrestrial and satellite and in
some places HYPERChannel adapters.

there was an engineering & scientific group in kingston ... and
in the early part of 80s ... I had a T1 link between san jose and the
kingston engineering & scientific group over the SBS T3 IBM
satellite network. There were a dozen(?) or so c-band T3 tdma
IBM-dedicated earth stations that SBS had at various plant-sites
... and I had a tail-circuit in san jose to the san jose earth station
... and then a tail circuit from the Kingston earth station to the
kingston engineering and scientific group. That E&S organization
in Kingston at one point had a 3090 with vector processing and numerous
Floating Point Systems boxes ... and were doing things like molecular
modeling. I think the scientific visualization was being done out of
the E&S organization.

Then we got our own dedicated HSDT TDMA earthstations and our own
dedicated transponder ... a HSDT TDMA earth station went into Yorktown
on the east coast ... and the HSDT link to the Kingston E&S group
switched from being a circuit to the Kingston IBM earth station ... to
a circuit to the HSDT earth station in yorktown.

I didn't pay a lot of attention to the organization in Kingston
... but at some point there was a project to design an IBM
"supercomputer" sponsored by a senior corporate executive ... it
wasn't clear where the lines were between the (newer) supercomputer
project and the kingston engineering & scientific organization. The
project in kingston
supposedly designing a supercomputer was also providing a lot of
funding to Steve Chen ... a couple recent threads mentioning Chen:
http://www.garlic.com/~lynn/2009o.html#29 Justice Department probing allegations of abuse by IBM in mainframe computer market
http://www.garlic.com/~lynn/2009s.html#5 While watching Biography about Bill Gates on CNBC last Night
http://www.garlic.com/~lynn/2009s.html#42 Larrabee delayed: anyone know what's happening?
http://www.garlic.com/~lynn/2009s.html#59 Problem with XP scheduler?

then in oct91, the senior executive sponsoring the supercomputer
effort retired ... and there appeared to be serious audits of some
number of projects. My impression was that was when they started
looking for technology to transfer to Kingston. The Kingston
organization announced a world-wide internal technology conference for
mid-jan 92. We advised some of the engineers not to attend because
there were possible consequences if Kingston's attention was
attracted. Then things happened very, very quickly.

Anne & Lynn Wheeler <lynn@garlic.com> writes:
group. That E&S organization in Kingston at one point had a 3090 with
vector processing and numerous Floating Point Systems boxes ... and were
doing things like molecular modeling. I think the scientific
visualization was being done out of the E&S organization.

discusses doing HYPERChannel work for the IMS groups in STL and Boulder
(in 80/81). The IT guy I was working with in Boulder later transferred
to the Kingston E&S group ... and I worked with him on the HSDT link
into the E&S operation.
http://www.garlic.com/~lynn/subnetwork.html#hsdt

johnf@panix.com (John Francis) writes:
The company I work for sells airline schedule lookup tools
that can be incorporated into a web page. We take in a few GB
of data on a daily basis, digest it down to a smaller format
(anywhere from 1MB to 10MB), and have a wicked fast lookup
engine running on our servers.

recent post mentioning being asked to consult with a large airline res
system ... and coming back with rewritten "routes" ... involved taking
everything from the full machine-readable industry OAG (at the time over
4000 airports world-wide & over 400k flt segments ... or 600k flts
... depending on how things were counted ... some equipment flew with
multiple flt designations) ... digested down to a much smaller format
... and did wicked fast lookup on nearly any machine with at least
32mbytes of real storage (would operate as an interactive PC application
or as client/server operation).
http://www.garlic.com/~lynn/2010b.html#13 Larrabee delayed: anyone know what's happening?

At the time, "routes" represented 25% of the processing load on their
res. system. As mentioned, the initial pass ran 20 times faster than
their mainframe implementation ... and for the version running on the
rs/6000 320 ... some careful processing organization for 6000 cache
sensitivity ... got another five times improvement (overall 100 times
improvement).
Then redid the sequence ... so that several human interactions were
collapsed into a single interaction. That single interaction then was only
about ten times faster than any of the individual original interactions.

As mentioned it could run on just about any PC or workstation with
32mbytes of real storage ... and ran either as an interactive
application or configured to run client/server.

johnf@panix.com (John Francis) writes:
We do the front-end stuff (the stuff that Orbitz/Expedia/... do); find
which of all those flight segments (almost 2M of them nowadays) make
sense for a particular itinerary (including multiple flight segments,
and taking into account Minimum Connect Times and Traffic Restriction
Codes). You say you want to get from Strasbourg to Ann Arbor, say,
and we take it from there.

I took it back to demo ... and they had some things that they considered
hard to do ... they asked for some unknown origin airport in kansas and
some equally unknown destination airport in georgia; it instantaneously
found a route that involved five connections and more than 24hrs elapsed
time (hint: kansas and georgia weren't in the same country) ... while
honoring most minimum connect times. This was something that, at the
time, they couldn't do. It was actually faster than doing SFO to Kennedy
... because there is a much larger number of possibilities for that pair.

I had minimum connect times for most airports between different gates in
the same airport ... what i didn't have in that first version was
minimum connect times for New York or Washington ... both are "generic"
names for multiple airports ... where arrival at one airport may then
involve a 90min bus ride to a connection out of a different airport
(plus redoing security).

also got a separate file that gave latitude/longitude for every airport
in the world ... and so could draw simplistic (straight-line) maps of
flt segments between airports.

part of the difference was that the traditional "routes" had evolved
from the 60s as a lookup of a database of predetermined ways of getting
from A-to-B.
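
a minimal sketch of the compute-everything alternative in python
(hypothetical data and a uniform minimum connect time; the real
application also handled multi-airport cities, per-airport connect
times, traffic restrictions, etc.):

    import heapq
    from collections import namedtuple

    # times are minutes; a flt segment is a directed, timed edge
    Flt = namedtuple("Flt", "origin dest depart arrive")
    MIN_CONNECT = 45   # assumed uniform minimum connect time

    def find_route(flts, origin, dest, earliest=0):
        # earliest-arrival search: follow as many connections as needed,
        # rather than looking up predetermined A-to-B routings
        by_origin = {}
        for f in flts:
            by_origin.setdefault(f.origin, []).append(f)
        heap = [(earliest, origin, [])]   # (arrival so far, airport, path)
        done = set()
        while heap:
            t, apt, path = heapq.heappop(heap)
            if apt == dest:
                return path               # earliest-arriving itinerary
            if apt in done:
                continue
            done.add(apt)
            ready = t + (MIN_CONNECT if path else 0)
            for f in by_origin.get(apt, ()):
                if f.depart >= ready:
                    heapq.heappush(heap, (f.arrive, f.dest, path + [f]))
        return None                       # no feasible routing

    flts = [Flt("SJC", "SFO", 7*60, 7*60+30), Flt("SFO", "JFK", 9*60, 17*60)]
    print(find_route(flts, "SJC", "JFK"))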

I had recently done a port of a 60k-statement vs/pascal program to other
vendors' (mostly workstation) platforms (part of IBM moving off of a
large set of internal proprietary vlsi design tools to industry tools
... one of the approaches was giving the internal tools to industry
tool companies) ... which did automated physical layout. While I did
the airline res in C rather than pascal ... there were some
similarities ... although the airline res was actually easier.

the layout porting was interesting ... because some of these other
(workstation) vendor pascals ... appeared never to have been used for
anything other than educational-institution student assignments; which
in one case was complicated by the vendor having outsourced their pascal
support to an organization 12 time zones away (so even tho I could
easily drop in on the workstation vendor ... there was still at least
24hr turn-around on pascal issues).

from above:
The vulnerability resides in a feature known as the Virtual DOS Machine,
which Microsoft introduced in 1993 with Windows NT, according to this
writeup penned by Tavis Ormandy of Google. Using code written for the
VDM, an unprivileged user can inject code of his choosing directly into
the system's kernel, making it possible to make changes to highly
sensitive parts of the operating system.

Happy DEC-10 Day

Anne & Lynn Wheeler <lynn@garlic.com> writes:
the layout porting was interesting ... because some of these other
(workstation) vendor pascals ... appeared never to have been used for
anything other than educational-institution student assignments; which
in one case was complicated by the vendor having outsourced their pascal
support to an organization 12 time zones away (so even tho I could
easily drop in on the workstation vendor ... there was still at least
24hr turn-around on pascal issues).

Happy DEC-10 Day

Anne & Lynn Wheeler <lynn@garlic.com> writes:
At the time, "routes" represented 25% of the processing load on their
res. system. As mentioned, the initial pass ran 20 times faster than
their mainframe implementation ... and for the version running on the
rs/6000 320 ... some careful processing organization for 6000 cache
sensitivity ... got another five times improvement (overall 100 times
improvement). Then redid the sequence ... so that several human
interactions were collapsed into a single interaction. That single
interaction then was only about ten times faster than any of the
individual original interactions.

As mentioned it could run on just about any PC or workstation with
32mbytes of real storage ... and ran either as an interactive
application or configured to run client/server.

Looking at an old reference from the archives, I mention that, based on
rs6000/320 thruput measurements, if moved to the rs6000/580 it would
trivially handle their existing transaction load running on multiple
3090s ... and their target expanded load, requiring several hundred
ES9000 processors, could be handled on ten 580s (changing things from
DBMS lookup to purely compute route finding); this involved doing more
processing (as I was demonstrating in the rewritten application), more
flts, and a significant increase in the number of transactions.

for one of the demos, there were 4048 airports and 635820 flt segments.

say a 580 is possibly in the range of 100mips ... then a 1000mip TREO
potentially is the equivalent of the ten 580s (theoretically the full,
world-wide transaction load) ... if it had faster memory.

I had a thing about "change of equipment". The earliest I'm
aware of is an early morning TWA flt out of San Jose.

Printed OAG and reservation terminals would list non-stop and direct
flts before listing connecting flts. The airlines came up with "change
of equipment" ... a flt takes off with multiple flt numbers going to
different destinations. It lands at some point, and some passengers
have to get off for "change of equipment" (connection by any other
name). This got more of an airline's flts listed in the (top) "direct"
section (as opposed to the connecting section).

TWA used to park some planes overnight in San Jose because it was
cheaper than SFO. Early morning TWA flt out of san jose left with two
flt. numbers, one going to Seattle and one going to Kennedy.
Passengers going to Seattle would stay on the plane when it stopped in
SFO ... but passengers going to Kennedy had to get off in SFO and
change to a different plane.

There appeared to be a Continental flt Honolulu to LAX with the most
flt numbers (and the most change of equipment) ... aka a half dozen
Continental flt numbers all listed as departing Honolulu at the same
time and arriving LAX at the same time (and flying the same model
plane). Of course today ... with alliances ... not only might the same
equipment have multiple flt numbers for the same carrier ... it might
also have multiple flt numbers for multiple different carriers.

One of the other issues for the airline res system was that it only had
information for a limited number of connections ... for
origin/destination pairs requiring more connections ... routes had to be
figured manually. An excuse for some of the multiple flt number scheme
was that it made it easier for agents to work out some of the more
complex origin/destination pairs. My application rewrite ... would
follow as many connections as necessary in order to get between two
points (and eliminated that justification for flts with multiple flt
numbers).

Other random trivia ... there was a flt in South America with the
largest number of flt segments ... i.e. the flt departs first thing in
the morning ... makes more than a dozen landings/take-offs during the
course of the day ... before landing that night at the same airport it
started the day from.

Pat Farrell <pfarrell@pfarrell.com> writes:
Early this century, I worked for a while at Fannie Mae. They had a small
problem with rounding of numbers. It made the front page of the
Washington Post, NY Times and Wall Street Journal.

The error was $900 million. The press made out that this was a
big problem. It was really just rounding at the sums that Fannie Mae
dealt with.

The problem was more fundamental than even improper use of floating
point. They had some systems written as Excel macros. They had systems
written on ten if not twenty different kinds of hardware, and too many
kinds of software to count.

The corporate data processing folks were so far behind the curve that
the analysts just implemented what they needed locally. Including in Excel.

Also, GAO has started doing a database of executives fiddling public
company financial reports (in spite of SOX). The executives get a boost
in compensation based on the fiddled numbers. Later the financials may
be restated ... but the compensation is not forfeited. One example: in
2004, Freddie was fined $400m for $10b in fiddled financials and the CEO
was replaced ... but allowed to keep $60m.

jmfbahciv <jmfbahciv@aol> writes:
A perfect example why Congress is such a mess. We are getting
what we vote for.

GLBA was a pretty good example of "buying" votes with special provisions
(as a countermeasure to a presidential veto).

folklore from the time of GLBA was that the president was going to veto
the bill ... republicans easily had the votes to pass the bill, but not
to override the veto. then there were provisions added to the bill that
eventually got sufficient dem votes to easily override any veto ... at
which point they passed the bill, sent it to the president, and the
president signed the bill (a veto of a bill with such a lopsided vote
would have been pointless). somewhat supported by the wiki write-up
... but presented/phrased somewhat differently (and it didn't go into
lots of detail about which provisions for which votes; not like some of
the very state-specific stuff in the health bill)
http://en.wikipedia.org/wiki/Gramm-Leach-Bliley_Act

about the time that GLBA was going on ... we had been brought in to
help wordsmith the cal. state electronic signature legislation ...
and I've mentioned several times being tangentially involved in the
cal. state data breach notification legislation. some past posts
http://www.garlic.com/~lynn/subpubkey.html#signature

Several participants in electronic signature legislation were also
heavily involved in privacy issues. there had been detailed, in-depth
consumer surveys and the number one privacy issue was identity theft
... specifically the kind resulting in fraudulent financial transactions
(account fraud) ... frequently as a result of some data breach. There
seemed to be little or no corrective action being done about the
situation; so they appeared to believe that resulting publicity from
data breach notifications would help motivate corrective action.

Besides the repeal of Glass-Steagall, there was a big deal that GLBA
(the 1999 bank modernization act) was specifically going to prevent
walmart (and m'soft) from becoming banks (if you already are a bank, you
get to stay a bank; if you aren't already a bank, you don't get to
become one) as a mechanism for protecting small community banks (they
may have more to worry about from the too-big-to-fail institutions than
walmart).

Cal. was also preparing "opt-in" privacy legislation ... when
"opt-out" was added to GLBA. In "opt-in", the consumer has to
specifically authorize sharing of personal information. In GLBA,
"opt-out" allows sharing unless the consumer has notified the
institution that they don't want sharing ("opt-in" was viewed as being
significantly more onerous to the financial industry ... and people in
cal. viewed the addition of "opt-out" to GLBA as federal pre-emption
of their efforts; they also expressed concern about what other things
congress might do in the way of "federal pre-emption").

Later (2003 or 2004), I was at a national privacy conference meeting at
the Renaissance hotel in Washington DC. One of the sessions had a panel
of the FTC commissioners. In the Q&A, somebody in the audience got up
and said he worked on customer call-center software used by most of the
financial industry. He claimed he knew that most of the people in
call-centers answering the 1-800 numbers for "opt-out" were not provided
any mechanism for recording caller information (no record was kept of
callers wanting to opt out). He asked the FTC commissioners if they
had any intention of investigating the situation.

Happy DEC-10 Day

Natarajan Krishnaswami <natarajan+usenet@krishnaswami.org> writes:
Floating point has desirable properties for exponential processes,
such as interest, statistical calculations, and depreciation. (Things
like the decimal floating point Cowlishaw (ObAFC: Rexx!) has been
pushing, even more so.)
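
a tiny illustration of why the representation matters for money, using
python's stdlib decimal module (which implements the general decimal
arithmetic Cowlishaw specified; the figures are just the classic 0.1
example):

    from decimal import Decimal

    # ten dimes in binary floating point vs. decimal floating point
    binary = sum([0.1] * 10)
    exact = sum([Decimal("0.1")] * 10)

    print(binary == 1.0)   # False -- 0.1 has no exact binary representation
    print(exact == 1)      # True  -- exact in decimal arithmetic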

Happy DEC-10 Day

Pat Farrell <pfarrell@pfarrell.com> writes:
I don't think that is literally true. But it is true that all the IT
folks moved out of the fancy Washington DC office to way out near
Dulles. They had an IT staff of nearly 1000. I don't know if all of them
were employees; some were consultants.

The fancy DC building was very heavily PR and Legal folks.

There was a big revolving door, and the big wigs were very well
connected to the political powers.

the story was something about a $20,000/annum default lobbyist
retainer for nearly everybody that had ever been anybody in the
washington area; 1000 is a measly $20m/annum, 5000 is still only
$100m/annum (about the same as the CEO's compensation) ... they didn't
necessarily actually have to do anything ... just be on call if they
were needed.

search engine history, was Happy DEC-10 Day

Michael Wojcik <mwojcik@newsguy.com> writes:
The original RS/6000 (RIOS) used a 6-chip CPU board, which was the
first commercial POWER ("Performance Optimized With Enhanced RISC",
where RISC was redefined as "Reduced Instruction Set Cycles",
according to Phil Hester).

it was also claimed that rios had a 52bit address ... this was left over
from romp claiming a 40bit address ... and left over from the original
801/risc philosophy.

rios/romp/et al. had 32bit instruction addresses; the top four bits
mapped to one of sixteen segment registers, with the low 28 bits as the
segment offset.

801 had inverted tables and virtual addresses were segment-id
associative. In romp, the segment register was 12bits ... allowing
concurrent definition of 4096 (virtual) segments. from pre-unix
801/romp lore ... since inline application code could change the
segment-id in a segment register as easily as an address in a general
purpose register ... there was a claim that applications had a 12+28 bit
effective address space (the 12bit segment-id plus the 28bit segment
offset).

moving into unix with a more traditional 32bit addressing paradigm
... with relatively statically assigned segments ... there tended to be
relatively static segment-id values to simulate a single virtual address
space.
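
roughly, in a pseudo-python sketch of the described mapping (not actual
hardware behavior; the register contents are made-up values):

    def effective_to_virtual(addr32, seg_regs):
        # top 4 bits of the 32-bit effective address select one of 16
        # segment registers; the low 28 bits are the segment offset. the
        # selected register supplies a segment-id (12 bits on romp, 24
        # on rios) that is concatenated with the offset to form the
        # virtual address used by the inverted-table lookup
        sr = (addr32 >> 28) & 0xF
        offset = addr32 & 0x0FFFFFFF
        return (seg_regs[sr] << 28) | offset

    seg_regs = [0] * 16
    seg_regs[3] = 0x7A5        # some 12-bit segment-id, romp-style
    vaddr = effective_to_virtual(0x30001234, seg_regs)
    # 12+28 = 40-bit virtual addresses on romp; 24+28 = 52 bits on rios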

for instance, in 370/168 ... it was address-space associative. there was
a 7-entry "STO-stack" ... and then each (virtual memory)
table-look-aside entry had a 3-bit (virtual address space) identifier.
STO is "segment table origin" address ... where there was a unique
segment table per address space. There could be an arbitrarily large
number of virtual address spaces (as many segment tables as could be
built in real memory) ... but the 168 only remembered the most recent
seven.
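
in the same pseudo-python vein, a sketch of the 168 scheme (round-robin
replacement is an assumption here, not the 168's actual replacement
rule):

    class StoStack:
        def __init__(self):
            self.slots = [None] * 7  # slot index doubles as the 3-bit tag
            self.next_victim = 0
            self.tlb = {}            # (tag, virtual_page) -> real_frame

        def switch_to(self, sto):
            # address-space switch: look the STO up in the 7-entry stack
            if sto in self.slots:
                return self.slots.index(sto)  # remembered; TLB entries valid
            # an eighth address space displaces one of the seven; TLB
            # entries carrying the reused tag must be scrubbed
            tag = self.next_victim
            self.next_victim = (self.next_victim + 1) % 7
            self.slots[tag] = sto
            self.tlb = {k: v for k, v in self.tlb.items() if k[0] != tag}
            return tag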

The 801/risc architecture eliminated the hardware having to manage all
such stuff ... and pushed it off on software.

In the move from ROMP to RIOS ... even tho by then it was a purely UNIX
platform ... somehow they managed to retain the pre-unix ROMP
"addressing" description ... except that RIOS now had 24bit segment
registers (which theoretically gave 24bit+28bit=52bit ... assuming the
non-unix, earlier cp.r programming paradigm).

Anne & Lynn Wheeler <lynn@garlic.com> writes:
Besides the repeal of Glass-Steagall, there was a big deal that GLBA
(the 1999 bank modernization act) was specifically going to prevent
walmart (and m'soft) from becoming banks (if you already are a bank, you
get to stay a bank; if you aren't already a bank, you don't get to
become one) as a mechanism for protecting small community banks (they
may have more to worry about from the too-big-to-fail institutions than
walmart).

the issue with walmart was that walmart accounted for somewhere between
25-30% of the retail transactions in the country ... and a corresponding
percentage of plastic card payment transactions. the interchange fees
taken by the financial infrastructure on those transactions are
enormous. walmart was claiming it wanted a bank charter to become its
own merchant acquiring bank ... being able to retain that portion of the
payment transaction interchange fee. that possibility easily justified
the amount of money that was poured into congress for GLBA ... some news
report last year said that the financial industry got $250,000 in benefits
report last year that the financial industry got $250,000 in benefits
for every dollar they spent in contributions and lobbying
http://www.garlic.com/~lynn/2009p.html#2 Opinions on the 'Unix Haters' Handbook

The regulated portion of the too-big-to-fail US financial institutions
has gotten 40-60% of their bottom line from these fees ... and walmart
being able to retain the merchant interchange fees would be a big blow
(to those too-big-to-fail merchant acquiring institutions).

the scenario that these institutions used with the community banks was
the notorious reputation that walmart has for efficiency and cutting fat
and overhead ... and what would happen if it did that for banking.
currently about 1/3rd of the population is unbanked ... mostly below
the profit margin for current financial institution operations. The
specter is that walmart would come in, apply its reputation for cutting
fat and overhead to banking, and then be able to profitably provide
financial services to those unbanked. Once walmart had a lean & mean
profitable financial operation with 1/3rd of the country as its customer
base ... it would put all the other financial institutions at a
competitive disadvantage (potentially forcing them to also transition to
operation with enormously reduced fat and overhead).

from the 60s, CMS was mainframe "personal computing" ... including some
number of commercial online timesharing service bureaus dating back to
the 60s with cp67/cms (much more than email). tymshare had done their
online computer conferencing on their vm-based commercial timesharing
service ... and offered it free to SHARE members (as VMSHARE) starting
in aug76, archive:
http://vm.marist.edu/~vmshare/

one of the biggest such online operations was the world-wide internal
(vm-based) HONE system ... eventually serving all branch office people
in the world; not long after the introduction of HONE ... it became a
requirement that ALL mainframe orders be processed via HONE applications
http://www.garlic.com/~lynn/subtopic.html#hone

then mid-range price/performance dropped below some threshold and the
43xx saw a gigantic explosion starting in the late 70s ... similar to
what DEC saw with vax/vms.

a big differentiator between 43xx and vax/vms ... was some large
commercial customers with orders of multiple hundreds at a time (the
smaller order sizes were otherwise similar)

the change in the mid-80s was that workstations and large PCs were
starting to take over that mid-range computer market (and PCs were
starting to subsume CMS personal computing). the continued large volumes
that endicott expected to see for the 43xx follow-ons never
materialized.

at one point, somebody from pok gave a talk in san fran ... and made
some statement that 11,000 of the vax sales should have been 43xx (which
would have been a good-size shift ... see numbers in above post)
... because 43xx provided better price/performance.

also, customers were finding that a vm/4341 cluster was cheaper than a
3033, with a higher aggregate mip rate, much larger aggregate storage,
and higher aggregate i/o capacity. There is folklore that because of the
above ... at one point, POK directed Fishkill to cut the Endicott
allocation in half for a critical component needed for 4341
manufacturing.

One of the things that was happening by the mid-70s ... as processing
power was increasing ... was that disk thruput improvements weren't
keeping pace with processor speed improvements. as a result, systems
were having to rely more & more on larger & larger electronic storage
... to compensate for the growing disk i/o bottleneck. 370s were stuck
with 24bit addressing and 16mbyte virtual and real storage ... which
resulted in significantly constrained operation for many 3033s.

The 3033 eventually came up with a hack for >16mbyte real storage, using
IDALs and sleight-of-hand with two unused bits in the PTE ... although
there was still an issue with some things having to be "below the line".
One of the issues was that part of the solution involved virtual pages
that were above the line having to be moved below the line ... and they
were going to rely on IDALs to write them out to disk and then read them
back in (below the line). Old email referring to the hack I gave them to
do the move w/o having to do I/O:
http://www.garlic.com/~lynn/2006t.html#email800121 in this post
http://www.garlic.com/~lynn/2006t.html#15 more than 16mbyte support for 370
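
the arithmetic of the two scavenged PTE bits (page and field sizes from
the 370 architecture; the 64mbyte figure is the theoretical ceiling of
the hack, not what was actually configured):

    # the 370 PTE carried a 12-bit page frame number; with 4k pages that
    # is 12+12 = 24-bit real addressing. scavenging two unused PTE bits
    # extends the frame number to 14 bits
    page_size = 4096
    print(2**12 * page_size)   # 16777216 -- 16 mbytes, the 24-bit line
    print(2**14 * page_size)   # 67108864 -- 64 mbytes theoretical ceiling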

R.Skorupka@BREMULTIBANK.COM.PL (R.S.) writes:
I would like to see the numbers above with avg MIPS for every kind of
OS. I guess that the MIPS number is much higher for TPF and z/OS and
quite low for VSE. Last but not least: I do not expect too many (any?)
customers with z/VM and no other OS. Including zLinux.

recent thread discussing a rewrite of a major TPF application whose load
was projected to grow to several hundred ES9000 processors ... but with
the rewrite able to handle that growth on ten RS/6000 580s.
http://www.garlic.com/~lynn/2010b.html#80 Happy DEC-10 Day

Remember Ed Curry!

Chris Barts <chbarts+usenet@gmail.com> writes:
Who Hides the Truth About NT, and Why?:
http://cryptome.org/ed-curry.htm
Date: Sun, 12 Sep 1999 18:16:01 +0400
...
Ed Curry is formerly a military man, an NSA-certified technical security
analyst, and a former independent contractor for the Microsoft
Corporation, who for several years has been saying that the Pentagon and
other US government agencies have violated their own security rules by
purchasing mass quantities of a non-secure computer operating system,
Windows NT.
...
It's getting hard to find information about Curry online and I'm pretty
sure it's impossible to find it anywhere else by now. His investigation
and complaints didn't go anywhere, after all. This just demonstrates how
right he was, even if he didn't see this specific threat at the time.

Happy DEC-10 Day

Morten Reistad <first@last.name> writes:
This is the point about "developing the Internet". We need an interactive,
public scripting language that can exist in a validated sandbox, and
that can interface, full-duplex, with the user _and_ with the service
provider. The clues here are public script language, interactive,
full-duplex and a sandbox that also includes a server. These four
attributes make this very distinct from an http/https based service.

some drift to my wife's arguments with the SNA group when she was con'ed
into going to POK to be in charge of mainframe loosely-coupled (cluster)
architecture ... and the SNA group constantly insisting that she had to
use (half-duplex) SNA for loosely-coupled coordination. There was a
(temporary) truce that she could use whatever she wanted within the
boundaries of the datacenter walls ... but SNA still had to be used
for anything that crossed the datacenter boundary.

a couple years later, research was doing vm/4341 cluster operation and
was using a full-duplex broadcast protocol for cluster coordination
(various cluster-wide operations took small subsecond elapsed time).
However, to ship the vm/4341 cluster support to customers ... they were
forced to move to (half-duplex) SNA ... and things that had been taking
small subsecond elapsed time were all of a sudden taking greater than 30
seconds (even tho the hardware was still identical).
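
a toy latency model of just the serialization effect (illustrative
numbers; the observed >30 seconds presumably also included per-exchange
protocol stack overhead):

    # full-duplex broadcast: send once, all replies overlap in flight.
    # half-duplex: one request/response turnaround per node, serialized.
    n_nodes = 8
    one_way_ms = 30                       # assumed per-message latency

    broadcast_ms = 2 * one_way_ms         # request out, replies in parallel
    half_duplex_ms = n_nodes * 2 * one_way_ms
    print(broadcast_ms, half_duplex_ms)   # 60 vs 480; gap grows with nodes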

Happy DEC-10 Day

Eric Chomko <pne.chomko@comcast.net> writes:
Clearly having microcomputers in the business world was significant,
but it would NEVER become what it is today unless people bought them
for their homes as well. The reason is that the advent of PCs in the
home drove down the price due to an increase in demand. Sure IBM would
have loved charging more for their PCs and kept them out of homes and
at the office, but people would have a choice to have a home computer
with business apps and tell IBM what they could do with their high-
priced computers. The clone market sort of headed that whole thing
off, and traditional home computers started to add business software
to their computers as well.

i would claim that there was a chicken&egg ... the hobbyist stuff had
difficulty in the home market. the volumes the PC got came from the
business market ... especially since it was about the same price as a
3270, and businesses buying tens of thousands of 3270s could get a PC
for about the same price and do both 3270 and some local computing
(negligible incremental business justification for a PC if a 3270
terminal was already justified). that resulted in a big jump in volume
... and attracted a lot of developers (because of the volume/install
base).

in the early 80s, my brother was regional sales for apple (claimed
largest physical region in conus) and would periodically come to town
... and I would be invited to after-work business dinners. I got to
argue with some mac developers (before it was announced) about whether
or not the mac needed a terminal/3270-emulation feature in order to get
the volumes going.

search engine history, was Happy DEC-10 Day

Jim Stewart <jstewart@jkmicro.com> writes:
MIPS processors were used in Tivo series 1 and 2,
running Linux. One of the first mainstream embedded
Linux products and one that exposed a bunch of
casual hackers to the architecture.

then he moved over to head up somerset (motorola, ibm, apple, et al, to
do power/pc); after a stint there ... about the time we went back to
san jose ... he took the job as president of mips. all the executives
got a personal indy ... so I offered to order it for him and take it
home and configure it ... it stayed home until he left that job (and
I had to turn it back in)

Oldest Instruction Set still in daily use?

jmfbahciv <jmfbahciv@aol> writes:
yes, Hiding. There was a big stink when Pelosi insisted that
the meetings to iron out the differences between the two bills
be done in closed sessions. Nobody knows what that new bill
looks like. It's also not in the news because nobody has
anything to report.

Ultimately, this is the downside of spam laws that codify an opt-out
regime. As I noted in November, most of the rest of the world requires
that marketers first get a user's permission. The gold standard laws are
the ones that also specify the permission be 'informed' -- i.e., the
user isn't being tricked into giving permission and has sufficient
information to make a choice.

ps2os2@YAHOO.COM (Ed Gould) writes:
Now this goes back aways and my memory is not 100 percent but...

There are probably some of you here that remember when the White House
(in the 70's-80's) lost a lot of email from around the time of
Watergate.

I had a friend who was an IBMer working at the White House in that
time frame. He was involved in trying to get back all the lost
emails. My memory is iffy here but when I was talking with my friend
he was telling me how exhaustively IBM worked at getting back the
emails. They had quite a few factory types working on getting them
back. They were not particularly successful because of data getting
written over and recovery was at best spotty.

I am pretty sure they ran some type of PROFS (it was VM based, that is
all I remember) and he got somewhat familiar with reading track dumps
and also a whiz-bang (he never gave me specifics) way of reading what
was underneath what was current on the track.

He is long time retired and is enjoying a well deserved retirement so
I hope he doesn't get in any trouble for anything I am writing here.

Although overwriting can be used for clearing this media, the method is
time consuming and generally never used. Also, inter-record gaps may
preclude proper clearing. A better method for clearing Type I, II, and
III tapes is degaussing with a Type I or Type II degausser. This
procedure is considered acceptable for clearing, but not purging, all
types of tapes.

Degaussing with an appropriate degausser is the only method the DoD
accepts for purging this media. Specifically, a Type I degausser can
purge only Type I tapes, and Type II degaussers can purge Types I and II
tapes. No degausser presently exists that is capable of purging Type III
tapes in accordance with NSA/CSS Specification L14-4-A.

R.Skorupka@BREMULTIBANK.COM.PL (R.S.) writes:
Well... I remember Ronald Reagan, I know what the White House is, but
I know neither Ollie North, nor what PROFS is, nor what the
relationship of VM to Ronald Reagan is.

not a lot of employees got direct access to VMSHARE ... so I set up a
process where TYMSHARE would regularly send me a complete dump of all
VMSHARE (and later, PCSHARE) files ... and I would put them up on
several internal systems, including HONE. Another example: somebody from
the branch office in Kuwait sending me email (regarding VMSHARE info)
http://www.garlic.com/~lynn/2007b.html#email830227

Paris-La Defense was one of the first overseas system installations I
was involved in, when EMEA hdqtrs moved from the states in the early 70s
(back then, overseas links weren't so pervasive, so it was a lot harder
to figure out how to logon back home to read email).

bbreynolds <bbreynolds@aol.com> writes:
Was that a component which was shared by the 3033? Something
unique to the 4341? I know that IBM's internal politics were
sometimes off the wall, but that folklore seems extreme.

While I was in Endicott, I think I talked them into putting me on
the distribution list for VM functional spec. documents. After the
visit to Endicott, we went by Cornell Univ. for an afternoon/evening.
They had a number of interesting things to say. We have talked before
about doing a joint study with them on their mini-disk manager. They
finally asked xxxxx about it at the last Share meeting. He hemmed and
hawed around for a long time not sounding very hopeful and finally
said any such undertaking has to be approved by YYYYY. MIT Prof was
also there giving a seminar for a week or 2. They had a funny story to
tell. On the 1st day MIT Prof had some not very complimentary things to
say about Cornell's comp. science department. They took him aside at
lunch and told him that wasn't exactly the correct thing to do. He
apparently held his tongue for a whole week. Finally he had the
opportunity to state that if all computers at Cornell were destroyed
the computer science department would never know about it.

After Cornell we went by Kingston and then POK. In both Endicott
and POK we had some very interesting discussions about confidential
stuff that is going on. In Endicott especially, there was even a
hardware modification design session in which I think we worked some
stuff out. Finally found out what head-of-POK was going to do about the
4341. I all along thot he would force Endicott into slowing the
machine down. I guess he couldn't come up with a way. He did come up
with something that is probably even more effective tho. He somehow
arranged for the East Fishkill plant to cut their hardware output
allocation to Endicott in half. There were comments that head-of-POK
was called several choice names. Endicott still may win tho.

tcp/ip is the technology basis for the modern internet but NSFNET
backbone was the operational basis for the modern internet (and CIX was
the business basis for the modern internet).

The director of NSF attempted to help out writing a letter to couple
people in the corporation (copying the CEO), but that just aggravated
the internal politics ... recent reference:
http://www.garlic.com/~lynn/2009q.html#42 The 50th Anniversary of the Legendary IBM 1401

there seems to be some hiccup with this recent post (I posted it twice)
between the mailing list and usenet (it's missing on usenet, but I
finally checked the mailing list archive; I normally read on usenet, but
post to the mailing list).
http://www.garlic.com/~lynn/2010b.html#99 "The Naked Mainframe" (Forbes Security Article)

One more thing about Endicott: their datacenter production VM
system is so backlevel that they asked about SJRL's VM system. There were
tentative plans made for some Endicott people to come out to SJRL and
pick up our floor system for installation in Endicott. Some of this in
light of the hardware error recovery that we have been adding,
especially in response to the problems in the DASD engineering labs
but also to normal problems we have here.

they had been running (pre-scheduled) stand-alone testing of
engineering/development hardware (several dedicated, stand-alone
370s). At one time they had tried to use MVS in the environment, but
had experienced a 15min MTBF. I undertook to completely rewrite the i/o
supervisor so that it would never fail and they could concurrently
test several devices (on demand, instead of the around-the-clock,
pre-scheduled, stand-alone testing that they had been doing).

Mentioning the 15min MTBF in a purely internal-only report brought the
wrath of the MVS group down on my head ... but that seems small in
comparison to the effort to cut the allocation to endicott for building
4341s.

in any case, the endicott datacenter people never came out; i think
somebody got around to asking how it would look if the endicott
datacenter was running one of my vm systems.

there was a mad rush to get stuff (hardware & software) back into the
370 product pipeline ... as well as getting around to kicking off the XA
effort (eventually "811", from the date on the hardware architecture
documents). POK also made the case to corporate that in order to make
the mvs/xa ship schedule, they had to kill vm370, shut down the vm370
development group, and move all the people to POK ... a couple
recent posts:
recent posts:
http://www.garlic.com/~lynn/2010b.html#37 Happy DEC-10 Day
http://www.garlic.com/~lynn/2010b.html#44 sysout using machine control instead of ANSI control

Endicott eventually managed to save the vm370 product mission (seeing
the leading edge of what was to become the vm midrange explosion), but
had to reconstitute a group from scratch. By the time of the above
email, they were still ramping up.