Some tokens are just memory cards: they export the private key outside
the token, and the cryptographic operations are done in an insecure
environment.

Some tokens protect the cryptographic operations, but neglect to protect
the PIN entry and the display.

FINREAD addresses this, but the technology isn't really mature yet. When I
read the specs, I couldn't even see that the minimum size of the display
was specified... Grrr.

I've been playing with E4-high chips in various configurations for ECC
digital signatures ... as well as being at the most recent FINREAD
conference ... and using some terminals that exceed FINREAD standard
(whatever they are).

one of the issues looked at early in the X9.59 secure electronic
payment standard work (for all account based payments ... regardless
of type, credit, debit, point-of-sale, internet, etc) was not only is
a secure terminal a necessity ... but there needs to be proof that a
secure terminal was, in fact, used (in much the same way that there
was proof that a secure hardware token, was in fact, used). this led
to formulation in the x9a10 standards work group that the secure
terminal needed to sign every transaction (in addition to the secure
hardware token) ... in order to prove that such a terminal was used
(not just mandate that such terminals exist, but also prove that they
were used).
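
a minimal sketch of that dual-signature idea, assuming OpenSSL 3.x
(EVP_RSA_gen) and linking with -lcrypto ... the transaction text, key
sizes, and the countersigned layout are illustrative guesses on my
part, not the X9.59 wire format:

    #include <openssl/evp.h>
    #include <stdio.h>
    #include <string.h>

    /* sign m with k's private half; returns signature length */
    static size_t sig_make(EVP_PKEY *k, const unsigned char *m, size_t ml,
                           unsigned char *sig) {
        size_t sl = 512;
        EVP_MD_CTX *c = EVP_MD_CTX_new();
        EVP_DigestSignInit(c, NULL, EVP_sha256(), NULL, k);
        EVP_DigestSign(c, sig, &sl, m, ml);
        EVP_MD_CTX_free(c);
        return sl;
    }

    /* verify sig over m with k's public half */
    static int sig_check(EVP_PKEY *k, const unsigned char *m, size_t ml,
                         const unsigned char *sig, size_t sl) {
        EVP_MD_CTX *c = EVP_MD_CTX_new();
        EVP_DigestVerifyInit(c, NULL, EVP_sha256(), NULL, k);
        int r = EVP_DigestVerify(c, sig, sl, m, ml);
        EVP_MD_CTX_free(c);
        return r == 1;
    }

    int main(void) {
        /* stand-ins for the token's and the terminal's keypairs */
        EVP_PKEY *token = EVP_RSA_gen(2048), *terminal = EVP_RSA_gen(2048);
        const unsigned char txn[] = "debit account 12345, amount 100.00";

        /* the hardware token signs the transaction */
        unsigned char tsig[512];
        size_t tsl = sig_make(token, txn, sizeof txn, tsig);

        /* the terminal countersigns transaction + token signature */
        unsigned char blob[1024], xsig[512];
        memcpy(blob, txn, sizeof txn);
        memcpy(blob + sizeof txn, tsig, tsl);
        size_t xsl = sig_make(terminal, blob, sizeof txn + tsl, xsig);

        /* the relying party checks both: token used AND terminal used */
        printf("token signature %s, terminal signature %s\n",
               sig_check(token, txn, sizeof txn, tsig, tsl) ? "ok" : "BAD",
               sig_check(terminal, blob, sizeof txn + tsl, xsig, xsl)
                   ? "ok" : "BAD");
        EVP_PKEY_free(token);
        EVP_PKEY_free(terminal);
        return 0;
    }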

there still exists the possible exploit that the hardware token was
used at some non-finread terminal at some time and the PIN exposed.
However, subsequent finread terminal use would preclude a trojan horse
providing the previously compromised PIN automagically to the hardware
token w/o the owner's knowledge ... the hardware token would have to
be in the hands of somebody fraudulently using the token.

The issue then is that while the relying party can prove whether or not a
finread-like (or better) terminal was used (with the terminal also
signing the message) ... it is more difficult for the token owner to
know whether they are actually dealing with a finread-secure terminal.

"Tor Rustad" writes:
The main problem with this type of authentication is that it does not
provide integrity of an important message (e.g. a transaction). A MITM can
alter the message without being detected.

Another thing with symmetric key systems is risk management in the case
one secret is broken/known; it's important that the whole system not break
down because of this. Such systems can be built, and in real life we have POS
and ATM security which uses mainly symmetric key cryptography. These real-life
systems take care of both authentication and integrity, and in some cases
confidentiality, using only symmetric keys.

The main advantages of asymmetric key systems are off-line processing
and the lower demand on the number of keys (no need for one key per zone).

shared-secret biometric systems have similar types of problems to
shared-secret pin/password/symmetric-key systems ... with one
additional disturbing aspect: if a shared-secret
pin/password/symmetric-key gets compromised, it is possible to do some
remediation and issue a new pin/password/symmetric-key. Given the
current state of the art, it is somewhat more difficult to issue a new
fingerprint in the case of an electronic fingerprint pattern being
compromised.

Eric Smith <eric-no-spam-for-me@brouhaha.com> writes:
From personal conversations with some of the engineers and managers
involved, it is definitely the case that some of the company thought
that the 432 was either not going to succeed, or not going to be
competitive at the low-end to midrange market, and those people expected
the 8086 to fill that market. Others thought that the 432 would
succeed, but that it would take a while, and they needed to stay
competitive until the 432 displaced things, so they viewed the 8086 as
a stopgap measure.

at a late-'70s sigops at asilomar there was a talk on the 432 ... the big
unresolved issue was how to apply fixes ... the "code/algorithms/ideas"
that had been sunk into silicon were significant ... as were the
associated bugs that were being found. the unresolved 432 issue was how
to fix silicon (software downloads didn't work) ...

Minimalist design (was Re: Parity - why even or odd)

jmfbahciv writes:
There is something you've overlooked. Take a look at any TOPS-10
SMP implementation; it will be version 7.nn. Actually take
a look at any PDP-10 implementation especially TOPS-10 or
TOPS-20. They knew how to take prioritized interrupts, do
the minimal necessary and be ready for the next one.

There was a trade-off in CP/67 and the revision of the "CP" kernel for
VM/370. In CP/67, it would handle an I/O interrupt up thru "channel
level" processing (i.e. things specific to channel-level operation;
i/o operations tended to be viewed as a hierarchy: channel, control
unit, device) and then flag things clear and be ready for the next
interrupt (and/or another processor could do some things). In VM/370,
processing went thru until device-level processing was complete
... there were some increased CPU pathlength overhead issues in CP/67
allowing the earlier processing (namely the processing embodied in
the "CHFREE" macro ... aka channel free ... which disappeared in
VM/370).

cruff@ucar.edu (Craig Ruff) writes:
I have no direct experience with the Cray 2 systems, but our Cray 1,
Cray X-MP, Cray Y-MP and Cray C90 systems all used chilled water to
cool the systems indirectly via a heat exchanger to a chilled freon
loop.

the large 370s, 308x, 3090, etc used a closed (distilled) water inner loop
with heat exchange to an external chilled water cooler.

there was a thermal sensor on the inner loop ... however, at one
customer they lost flow on the external chilled water side ... and by
the time the (internal) thermal sensor was tripped ... there was enuf
latent heat and so little reserve capacity in the inner loop that the
machine fried.

Subsequently, flow sensors on the external chilled water side were
installed.

James Johnson writes:
Pure MIPS have never really been the basis for deciding if it is a mainframe. A
good interpretation would be a fair amount of processing power combined with
hugely large I/O; in other words, it moves a lot(!) of bits.
Supercomputers far outstrip mainframes in computational power for
calculation-intensive operations, but they don't have the I/O bandwidth of a
business mainframe.

actually a lot of "cluster" supercomputers started to significantly
exceed mainframe i/o capacity by the early '90s ... however they
tended to be configured for enormous amounts of disk sequential
transfer as well as enormous amounts of low-latency inter-processor
communication (i.e. in some cases tens of giga-byte channels, not
giga-bit; large numbers 32+8 disk arrays using 10mbyte/sec disks for
individual 320mbyte/sec data transfers, etc).

but notice that nothing prevented them from being configured with lots
of disks for individual transaction processing as opposed to parallel
sequential processing ... in fact, lots of mainframes have now adopted
such installations (i.e. configured with lots of small disks in
various kinds of mirroring &/or raid configurations ... something
picked up from this other market segment).

Is VeriSign lying???

Bernd Eckenfels writes:
This depends on the level of verification they offer. In the case of Class 1
Certificates (or "Free" or "E-Mail based" or whatever you call them), all it
takes is a reply-able E-Mail address. Since most other certificates which
have a paper-based process are not more secure (I don't think an American or
South African based company is able to verify the existence of a German
company by receiving some faxes), I think the process with the E-Mail or a
Web of Trust is good enough for most purposes. As a company doing serious
business with SSL you have to send the fingerprint to your customers anyway.

typically it means that for the type of information in the certificate
... they have checked with the authoritative agency(s) for that
particular type of information ... and then effectively certify that
the information in the certificate passed that particular agency
check.

for SSL domain name certificates ... it means that they checked with
the authoritative agency for domain name ownership ... the domain name
infrastructure.

note that the basic justification for SSL domain name certificates
boils down to integrity questions regarding the domain name
infrastructure ... the very same infrastructure that is the
authoritative agency for domain name ownership questions (which the
certification authorities have to check with, with regard to
certifying domain name ownership).

... for whatever information is being certified in a
certificate, the level of trust is dependent on 1) the process used to
check with the authoritative agency responsible for that information
and 2) the process used by that authoritative agency for accurately
keeping that information (aka frequently the certification authority,
certifying and manufacturing certificates ... is not the same as the
agency responsible as the final authority regarding the accuracy of
the information being certified).

for instance, identity theft where valid driver licenses and other
documents are obtained ... undermines what a
certificate authority can do in the certification process where
"identity" is certified (aka traditional x.509 certificates are
nominally couched in terms of identity certificates). that is totally
aside from the issue of identity certificates representing a
significant privacy problem.

"del cecchi" writes:
The TCM was the Thermal Conduction Module in which aluminum pistons
contacted the chips which were mounted face down flip chip to a
multilayer ceramic substrate. It was developed as a replacement for the
LEM Liquid Encapsulated Module in which the module was filled with a
coolant fluid. The LEM proved to not be reliable.

mirian@trantor.cosmic.com (Mirian Crzig Lennox) writes:
None of those instructions is MP-atomic per se. What I think you may be
referring to is the LOCK prefix. It's a kludge from a hardware design
standpoint, because the x86 was not really architected with MP in mind,
and the LOCK prefix (which came about as late as the 486, IIRC), just
engages an external mechanism not actually specified in the x86
architecture. Also, they make your code non-portable to 386-and-before,
which means realistically you need separate binaries for MP and non-MP.
In contrast, the VAX interlocked instructions have rigidly specified
behavior as part of the CPU architecture, which is guaranteed to work
consistently in both UP and MP modes, and on any VAX.

as an outgrowth of the 360 test&set (TS) atomic instruction for MP
operation, Charlie invented atomic compare&swap (aka CAS are charlie's
initials). The architecture/POP group in POK (namely Padegs & Ron
Smith) said that to get it into 370 architecture, it needed to have a
non-MP programming paradigm defined for it ... giving rise to the POP
programming notes for C&S operation in the non-MP world (serializing
multi-threaded, not necessarily MP, applications).
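
the same retry-loop paradigm described in those POP programming notes
maps directly onto modern C11 atomics; a small sketch (pthreads,
compile with -pthread; thread and iteration counts arbitrary) where
pre-emption at any point is harmless because the update is a single
atomic compare-and-swap:

    #include <pthread.h>
    #include <stdatomic.h>
    #include <stdio.h>

    static atomic_long counter = 0;

    static void *worker(void *arg) {
        (void)arg;
        for (int i = 0; i < 100000; i++) {
            long old = atomic_load(&counter);
            /* on failure, "old" is refreshed with the current value; retry */
            while (!atomic_compare_exchange_weak(&counter, &old, old + 1))
                ;
        }
        return NULL;
    }

    int main(void) {
        pthread_t t[4];
        for (int i = 0; i < 4; i++)
            pthread_create(&t[i], NULL, worker, NULL);
        for (int i = 0; i < 4; i++)
            pthread_join(t[i], NULL);
        printf("counter = %ld (expect 400000)\n",
               (long)atomic_load(&counter));
        return 0;
    }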

370 also had some privileged instructions that were defined for
serialized operation in an MP environment ... namely PTLB, IPTE, ISTE, &
ISTO (for managing virtual memory tables). When the 370/165 engineers said
that it would take an extra six months to design/build support for
IPTE, ISTE, & ISTO for the virtual memory hardware retro-fit to the 165,
all but PTLB were dropped.

later for the 3033, the IPTE selective invalidate was (re-)introduced.

for aix (rios/power, non-mp & power/pc), a C&S macro for
uniprocessor operation was defined ... however, this just generated an
svc interrupt and executed some "disabled" code in the FLIH that
simulated a C&S instruction. This was to support non-kernel,
multi-threaded application serialization operation (i.e. the original
stuff that was done in cambridge for the POP programming notes to get
C&S accepted for 370).

jeffreyb@gwu.edu (Jeffrey Boulier) writes:
HP does, or did, resell Stratus' fault tolerant boxes, which probably
explains the confusion. Stratus had moved from the i860 to PA-RISC. It
sells its own HP-UX systems, along with its proprietary VOS (which has
some similarities to Multics), FTX, a fault tolerant Unix (well, it used
to sell them; I think FTX is now dead), and lately Windows NT/2000
systems.

wasn't HP also trying to sell sequoia ft in the early '90s? ibm was
marketing the s/88 during this period ... a logo'ed stratus.

"Bill Todd" writes:
But they don't in general offer the same level of reliability, because
they don't guard against undetected hardware faults that allow a node to
continue running but produce incorrect results. The Stratus and Tandem
approaches, which compare the results generated by replicated hardware
before allowing them to become externally visible, do protect against such
faults.

however, at the time we were doing ha/cmp and talking to the people at
the 1-800 number system, stratus (& s/88) boxes still required scheduled
system downtime for system maint. ... and the 1-800 number system is
spec'ed at 5-nines availability (which stratus/s/88 couldn't meet
... one system maint. period exceeded several years' allowed
downtime).

the stratus/s/88 approach was then to cluster (i.e. two) the machines
... but then there was essentially no measurable availability
difference between a stratus cluster & a ha/cmp cluster ... but there
was a significant cost difference.

HP-UX will not be ported to Alpha (no surprise)

jeffreyb@gwu.edu (Jeffrey Boulier) writes:
Forgot about the s/88. I've heard about them, but know next to nothing.
What operating system did they run?

it was a straight stratus system that had been logo'ed by ibm and called the s/88.

i believe there were some issues with stratus (& other) salesmen
competing with ibm salesmen for the same customers with essentially
the same machine ... and then whether it was a "stratus" machine that
went in or an "s/88" machine that went in, and which salesmen got
credit.

Terje Mathisen writes:
Not only was LOCK part of the original x86 architecture, the XCHG
reg,[mem] opcode was specified to have an implied LOCK prefix, i.e.
there was no way to use that opcode in a non-atomic manner, even on cpus
that didn't support SMP.

It could still be useful in a system which had DMA or some other form of
asymmetric MP.

note that one of the uses of the atomic compare&swap work (from the
late '60s and early '70s, resulting in CAS going into system/370) was
to support multi-threaded, pre-emptable application code (i.e. enabled
for interrupts, non-kernel, etc). the MP barrier semantics from the
'60s (and earlier?) with test & set type instructions involved non-atomic
operation sequences (set barrier, do whatever, clear barrier) which
implied non-pre-emptable execution (unless you were very, very, very
careful).
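
for contrast, a sketch of the test-and-set style barrier in the same
C11 atomics (again pthreads, -pthread; counts arbitrary); if the
holder is pre-empted between "set barrier" and "clear barrier", every
other thread spins, which is exactly the non-pre-emptable assumption
described above:

    #include <pthread.h>
    #include <stdatomic.h>
    #include <stdio.h>

    static atomic_flag barrier = ATOMIC_FLAG_INIT;
    static long counter = 0;

    static void *worker(void *arg) {
        (void)arg;
        for (int i = 0; i < 100000; i++) {
            while (atomic_flag_test_and_set(&barrier))  /* set barrier */
                ;                                       /* spin */
            counter++;                                  /* do whatever */
            atomic_flag_clear(&barrier);                /* clear barrier */
        }
        return NULL;
    }

    int main(void) {
        pthread_t t[2];
        for (int i = 0; i < 2; i++)
            pthread_create(&t[i], NULL, worker, NULL);
        for (int i = 0; i < 2; i++)
            pthread_join(t[i], NULL);
        printf("counter = %ld (expect 200000)\n", counter);
        return 0;
    }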

"Bill Todd" writes:
Triple-modular redundancy ('TMR' - the Tandem 'Integrity' platform approach,
which is operationally similar to the 'pair and spare' quad-redundant
approach used by Stratus) really has little in common with VMS clustering,
HSC or otherwise.

I think that as early as the mid-80s there were numbers showing that
something like 90-95+ percent of system outages had nothing at all to do
with hardware failures ... aka non-FT hardware (except maybe in the PC
market) was becoming significantly more robust (fault tolerant?).

issues were becoming things like scheduled downtime, software
failures, operator mistakes, disaster&geographic survivability (terms
we had coined when we were doing ha/cmp).

to repeat a quote from the above ... one of the large financial settlement
infrastructures credited the two primary things contributing to them having
100 percent availability for the last six years were ...

that should have read "automated operator" (contributing significantly
to 100 percent availability for the previous six years) ... but in
general, it implies automated operations.

from past threads ... & slightly related ... batch-paradigm-derived
platforms and interactive/online-paradigm-derived platforms
tend to have somewhat different perspectives.

interactive/online-paradigm-derived platforms frequently assume that
the computer/program/application is interacting with a human and
involve implementations based on that assumption.

batch-paradigm-derived platforms tend not to assume that humans are
involved and also tend to have evolved much more sophisticated
infrastructure for automagically dealing with exceptions and
anomalies.

I remember trying to deploy some production web-oriented platforms in
the '95/'96 time-frame and having to deal with little things like: when
space was exhausted, standard svid unix sort (i.e. an
interactive-paradigm-derived platform) just continued with output of the
(truncated) data that it was able to process. there was no obvious
programming paradigm to automagically recognize and recover from the
filespace-full scenario (which is frequently part of many mainframe
production operations).
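
a small sketch of the batch-paradigm habit being described, under
POSIX assumptions: every write() return is checked, and ENOSPC gets an
explicit recovery/abort path instead of silently producing truncated
output (the recovery action here is just a placeholder):

    #include <errno.h>
    #include <stdio.h>
    #include <stdlib.h>
    #include <string.h>
    #include <unistd.h>

    /* write all of buf, treating a full filesystem as a first-class,
       recoverable condition rather than silent truncation */
    static void write_all(int fd, const char *buf, size_t len) {
        while (len > 0) {
            ssize_t n = write(fd, buf, len);
            if (n < 0) {
                if (errno == EINTR)
                    continue;        /* interrupted: just retry */
                if (errno == ENOSPC) {
                    /* recovery hook: free space, switch volumes, or
                       abort cleanly ... the point is the condition is
                       seen, not papered over */
                    fprintf(stderr, "filespace full; aborting cleanly\n");
                    exit(EXIT_FAILURE);
                }
                perror("write");
                exit(EXIT_FAILURE);
            }
            buf += n;
            len -= (size_t)n;
        }
    }

    int main(void) {
        const char *rec = "record 1\n";
        write_all(STDOUT_FILENO, rec, strlen(rec));
        return 0;
    }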

"Stephen Fuld" writes:
Jim Gray, when he was at Tandem, did a study on the causes of downtime. HW
failure was way down the list. I don't know if the paper is online
anywhere, but I think I have a paper copy I can try to dig out if people are
interested.

hack@watson.ibm.com (hack) writes:
The project was stopped, unfortunately. I can imagine no-downtime software
upgrades by building a new checkpoint boot image offline (for a new AIX), and
by maintaining a compatible application checkpoint structure when rolling in
a new version of the primary application.

i once did something similar but at the application level for an airline
res application ... i.e. prebuilt a large part of the image AND also
did rolling cluster cut-over (one of the ten "impossible" things that
was supposed to be addressed was the existing several-hour cut-over time).

a common term (at least in mainframe) has been RAS ... reliability,
availability, and serviceability. As RAS of core technology has
significantly improved over the last 30-40 years ... attention to
outages has shifted to monitoring, service level agreements,
geographic disaster survivability (i.e. replicated clustering at
geographic distances) and people mistakes (aka automated operator &
operations).

in the area of monitoring & SLAs ... all errors and outages are
actually monitored in detail, (industry) reports generated, and
contracts based on such features are standard.

One indication of whether something is an interesting-technology RAS
feature and/or really part of nuts & bolts business is whether there
is industry-wide monitoring and reporting of RAS information (as well as
people paying serious attention to the reports).

One example I know about involved some software I wrote once. One of
the new mainframes had been out for a year ... and there was this
industry-wide service that gathers the RAS/(LOGREC) files from customers
and publishes reports. For this new mainframe they expected that
there would be something like 3-5 total errors of a particular kind
across all machines for all customers over a period of a year. The
industry reports showed that there was in fact a total of 15 errors of
this particular kind across all machines for all customers for a
period of a year (not per machine & not per customer ... all
machines at all customers).

Turns out that sometime in the past I had written some software
simulation support for doing "channel I/O extension" of mainframe I/O
over telco links. When certain types of uncorrected telco transmission
errors occurred, the software simulation would emulate this particular
kind of error. They were able to track down some customers that were
running the channel I/O extension software and account for the extra 10-12
errors that had shown up in the industry reports. Also, the software
simulation code was changed to report a different kind of emulated
error condition.

schaef@io.com (MSCHAEF.COM) writes:
as it apparently did. For two largely one-product companies they apparently
didn't think through their approach to Windows very well. Some form of
Windows contingency plan would have been nice, especially considering the
money they were both sinking into Unix, Macintosh, NeXTStep, etc.

OT: almost lost LBJ tapes; Dictabelt

jcmorris@mitre.org (Joe Morris) writes:
One of the points that we stressed to budding speakers was that handouts
are almost always a Good Thing, and one of the reasons for that is exactly
what your lecturer recognized: if the listener already has in hand printed
material that contains the structure of the presentation as well as all of
the critical information (including equations, tabular data, and the like),
then the audience can actually think about what is being said and needs
to write only brief notes to flesh out what's printed. (Of course, this
assumes that the handouts are available before the presentation begins.
The value is far less if they are handed out only at the end of the
session.)

at the end of the hour, i had finished 2-3 overheads ... so they
scheduled a room right off the ballroom for a BOF (birds of a feather) to
finish the talk. The ballroom in the evening was typically where SCIDS
(society for continuous inebriation during share) was held (part of
the reason for the SCIDS function dated from TJW when alcohol wasn't a
permitted corporate activity, and you couldn't turn in travel expenses
for alcohol ... so the SCIDS event was an open bar covered under the
general SHARE registration fee). Anyway, the talk lasted from 6pm to
midnight ... with several intermissions for periodic refreshments out in
the main ballroom (which i thought contributed to the general quality of
the talk).

JO.Skip.Robinson@SCE.COM (Skip Robinson) writes:
The main problem seems to be the synchronous nature of ESCON protocol. Only
one I/O to a control unit can be active on an ESCON CHPID at one time. If
CHPID activity is delayed because of distance--that old speed of light
thing--the next I/O is held up until a response is received from the
previous one. This makes the CHPID look 100% busy even though the actual
data flowing across it is only a tiny fraction of 'native' capacity.

FICON protocol on the other hand is asynchronous, allowing multiple I/Os to
flow down the pike concurrently. Hence the response time for each I/O
should not take such a toll on throughput. This is still conjectural on our
part.

there was a lot of heat & churn in 90/91 time-frame in the FCS
standards process ... which might be described as trying to define a
half-duplex ESCON emulation for fiber channel.

doing asynchronous channel extension runs into some misc. issues. I had
written software support for asynchronous channel extension in 1981 (20
years ago) ... which allowed STL to remote a couple hundred people in
the IMS support group. there was actually more of a problem with
speed-matching and collisions ... trying to tunnel channels thru a telco
T1/1.544mbit/sec link ... which became severely aggravated by allowing
a large number of simultaneous asynchronous channel operations (needed
to add smarts about collision management & recovery). fiber channel
should be much less of a challenge (since the physical bandwidth is
significantly larger rather than significantly smaller).

Richard Drushel writes:
In 1994, when it seemed that everyone in our ADAM community
wanted to have a hard drive system or an ADAMnet floppy drive bigger
than the original 160K, yet balked at paying $150-300 to the last
remaining ADAM hardware vendor, I had an idea:

Everybody was dumping XTs and ATs for 486s and early Pentiums.
Even the lowly XTs had a 20MB hard drive, 360K floppy, and serial/parallel
ports. There were plenty of ADAM serial boards around, both new and
used, for $35 or less. Why not create a serial link between an ADAM
and a PC, and let the ADAM use the PC hardware? The PC runs a server
and listens for I/O requests. The ADAM EOS operating system is patched
to reroute I/O requests for ADAMnet devices to the server. So long as
the EOS function calls are handled properly (i.e., correct error codes,
exit flags, updated internal data structures), the user application
will never care that it's not talking to a genuine ADAMnet device.
Voila, instant ADAM serial ports, parallel ports, floppy and hard drives,
plus maybe even (some day) access to the PC graphics screen, real-time
clock, etc. All using PC hardware you already had and were going to
junk...just the price of a serial cable, null modem, and ADAM serial
board if you didn't have one already. I called it ADAMserve.

effectively that was what cp88 did 10 years earlier ... in support of
xt/370 (i.e. the 370 card using the xt as a server using xt devices
instead of 370 devices).

Steve O'Hara-Smith writes:
Pick an app that you want to write, write it without a UI and just
a socket interface (or similar) and document it as a published API of the
app. Now write a UI that goes between the user and this API. You have just
done what I have been advocating, your app is now ready to have multiple
UIs and even to be integrated into other applications smoothly.

i collapsed three of the above type of queries into a single transaction
and then had a couple of UI front-ends.

one was command line similar to the original and just listed the
information

another was a client GUI app which would put up a map and a list of
the routes that satisfied the query. locally on the client, a person
could sort the list by departure time, arrival time, elapsed travel
time, most airline points (some people, possibly on business travel,
attempted to maximize their airline miles). Highlighting a specific
flight would draw the route on the map. It also had an option to
download the latest "weather" map from the web ... so the route was
drawn over weather patterns.
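
a minimal sketch of the pattern being advocated, under POSIX socket
assumptions; the "ROUTE" line protocol, port 9000, and the canned
answer are all invented for illustration ... any command-line or GUI
front-end would speak the same protocol:

    #include <arpa/inet.h>
    #include <stdio.h>
    #include <string.h>
    #include <sys/socket.h>
    #include <sys/types.h>
    #include <unistd.h>

    int main(void) {
        int srv = socket(AF_INET, SOCK_STREAM, 0);
        int one = 1;
        struct sockaddr_in addr = {0};
        addr.sin_family = AF_INET;
        addr.sin_addr.s_addr = htonl(INADDR_LOOPBACK);
        addr.sin_port = htons(9000);
        setsockopt(srv, SOL_SOCKET, SO_REUSEADDR, &one, sizeof one);
        if (bind(srv, (struct sockaddr *)&addr, sizeof addr) < 0 ||
            listen(srv, 8) < 0) {
            perror("bind/listen");
            return 1;
        }
        for (;;) {
            int c = accept(srv, NULL, NULL);
            if (c < 0)
                continue;
            char req[256];
            ssize_t n = read(c, req, sizeof req - 1);
            if (n > 0) {
                req[n] = '\0';
                /* a real app would parse and run the query; here any
                   "ROUTE ..." request gets a canned answer */
                const char *resp = strncmp(req, "ROUTE", 5) == 0
                    ? "SFO-JFK 0800 1630 UA; SFO-JFK 0930 1800 AA\n"
                    : "ERR unknown request\n";
                write(c, resp, strlen(resp));
            }
            close(c);
        }
    }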

Jon Tveten writes:
I think I am the only "thing" here able to read punched cards. :)

--
Jon Tveten
A Norwegian in Australia

the holes in the card or printed text across the top?

when i was in school, i had a 2000-card assembler program that took
30-60 minutes to assemble and produce a TXT deck (depending on whether
i used DCB macros or used my own SIO and device drivers). I soon found
that it was frequently faster to PATCH the TXT deck by finding the
appropriate TXT card and DUP'ing it on an 026, applying the changes by
using multi-punch on the 026 (somewhat similar to C-q in emacs)
... aka the (TXT) cards punched by the 2540 didn't have any printing
across the top (and there weren't any symbols for most of the
punched values in any case).

gah@ugcs.caltech.edu (glen herrmannsfeldt) writes:
I thought the way it was done was to punch additional TXT cards and
place them at the end. If a later TXT card had an address that
overlapped it would write over the already loaded data. This way
they could be easily changed without removing the original cards.

Also, many programs would supply a patch area near the end, so that
there was a place to add new instructions.

there was a "REP" card/statement that could go at the end of the deck
that could be punched in "character" hex (as opposed to "binary" hex).

the REP card argument specified displacement in the deck and the data
to be inserted/replaced. REP cards could refer to any displacement in
the deck ... a common batch process might involve replacing a 4byte
instruction with a branch & link instruction to a dummy data patch
area (frequently defined at the end of the program) with new instructions
then inserted in the patch area.

I started using the multi-punch process (and learned to read the holes
in the cards) before I ran across any documentation referring to
REP cards.

mschaef@eris.io.com (MSCHAEF.COM) writes:
Okay. Can you run an OpenVMS binary compiled for a VAX machine
(11/780 or whatever) on a Alpha machine running OpenVMS? I know
that IBM offered, at various times, emulation systems for running
older binaries on newer machines. Didn't they do something
like this for the System/360? Did Digital do anything similar
during the transition to Alpha?

the 360s for the most part were microcoded machines ... where native
(micro-)code running on the native engines provided support for the
360 architecture (and the different 360 processor models tended to
have different native engine architecture and different microcode).

several of the 360 models had provisions for installing (& having
installed) microcode that supported earlier ibm machines like the 1401,
7090/7094, etc, and a switch on the front panel that selected whether
the machine was operating in 360 personality or 1401 (7090, whatever)
personality. Not only did the binaries for the earlier architectures run,
the machines ran the operating system (and/or monitor) for the earlier
machines.

there were also some packages that provided (at least) 1401 simulation
on 360 ... i.e. the ability to execute a 1401 application binary
within a 360 application providing 1401 simulation.

there was a cp/67 project that also provided the reverse ... it
simulated 370 virtual machines on a 360/67 running cp/67. This wasn't a
major effort since, for the most part, 370 was a superset of 360 and it
only required support in the cp/67 kernel for simulation of the new 370
instructions. The exception of sorts was that the virtual memory and
control register structure was different on 370 than it had been on
the 360/67, so there was quite a bit more simulation work that had to be
done in the cp/67 kernel for that part of the 370 architecture.

one of the stories is that the full 370 simulation had been running for a
year (as well as a version of cp/67 that was modified to run on the 370
virtual memory architecture rather than the 360/67 virtual memory
architecture) before the first engineering 370 relocate (virtual memory)
machine was built. So the engineers asked for a copy of the
370'ized cp/67 as a test case to boot on the first engineering
hardware (to give some idea of the level of "engineering" ... there
wasn't a boot/IPL button ... to boot/IPL the machine, a knife switch was
used).

In any case, the 370'ized CP/67 was booted/ipled on the machine and
very shortly failed. After some diagnosis, it was determined that the
engineers had implemented something wrong ... so the 370'ized CP/67
was quickly patched to correspond to the incorrect hardware
implementation and testing then proceeded.

a couple days ago somebody sent me a question on 3270 protocol which I
didn't know the answer to ... but it jogged some memory cells;
i've been trying to remember the terms used for the
3272/3277 and 3274/3278/9 protocols. I have some vague recollection
that one of the terms was CUT and that it may refer to the 3272/3277
protocol ... but I can't remember the other term (and/or am not even
sure CUT is one of the terms).

sarr@engin.umich.edu (Sarr J. Blumson) writes:
I thought this was pretty much routine in the early to mid 60s. I don't think
_any_ of the DTSS software ever had a clean assembly; we patched (octal in
this case) the binary and ran that way. A reassembly took a couple of hours,
and (because you would always "fix" other things you noticed) the
editing process would never converge anyway.

the standard process was to use REP cards ... I had just taken my
first programming course, a 2hr introduction-to-fortran programming
class, and then they gave me this summer job to port 1401 MPIO to the 360
(so the 360/30 could act as front end to the 709) ... that eventually
became my 2000-card assembler program ... and all i got was an
assembler manual, the 360 pop manual and a couple other pieces; but nothing
that documented REP cards.

I did get the machine room and all the facilities dedicated from 8am
sat. until 8am mon. (which continued in the fall for other projects,
but made it a little hard to go to mon morning class). I eventually
reverse engineered TXT cards and figured out how to repunch a binary
TXT card on a "character" keypunch (and only later got some
documentation about REP, TXT, ESD, END, RLD, etc cards).

While patch areas could be used with REP cards, I think (360) patch
areas became somewhat more common with load modules and superzap. Load
modules were TXT decks that had already gone thru the linkage editor, with
the result stored on disk. superzap could read a load module, verify/replace
bytes and write out the updated load module.

Private key

"Edward A. Feustel" writes:
In fact, when A and B copy "their" certificates on to their machine,
the private key comes with them. It is in an implementation dependent
addition to the X.509 certificate. Alternatively, A and B can generate
their key pair and send their public keys to a Certificate Authority
to manufacture and sign the public key certificates and return a copy
to A, B, and who ever might be holding the public key certificates
in a data base.

Ed

another simple example is PGP. Each user generates their own
public/private key pair and (effectively) their own certificate. Users
will distribute their own certificates (with their public key) to
whoever.

PGP puts the private key in the private key file that is encrypted
with password/passphrase. Public key "certificates" (both their own
and others) go into an unencrypted public key file.

A "signed" message is sent by encrypting the HASH of the message with
the person's private key and appending the signature to the message.
Recipients of the message can verify the signature if they have the
sender's public key certificate in their public key file.

An "encrypted" message can be sent if the sender has the recipient's
public key. A random secret key is generated and the message is
encrypted. The random secret key is then encrypted with the
recipient's public key and added to the message.

The recipient can decrypt the message if the random secret key has
been encrypted with their public key.
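
the two-layer mechanics of that "encrypted message" flow, sketched
with OpenSSL 3.x (link with -lcrypto); real PGP has its own packet
format and algorithm choices, this just shows a random session key
encrypting the message and the recipient's public key wrapping the
session key:

    #include <openssl/evp.h>
    #include <openssl/rand.h>
    #include <stdio.h>

    int main(void) {
        EVP_PKEY *recipient = EVP_RSA_gen(2048);
        const unsigned char msg[] = "meet at the usual place";

        /* random session key + IV, used once for this message */
        unsigned char key[32], iv[16], ct[128];
        RAND_bytes(key, sizeof key);
        RAND_bytes(iv, sizeof iv);

        /* encrypt the message under the session key */
        int n = 0, fin = 0;
        EVP_CIPHER_CTX *c = EVP_CIPHER_CTX_new();
        EVP_EncryptInit_ex(c, EVP_aes_256_cbc(), NULL, key, iv);
        EVP_EncryptUpdate(c, ct, &n, msg, sizeof msg);
        EVP_EncryptFinal_ex(c, ct + n, &fin);
        EVP_CIPHER_CTX_free(c);

        /* wrap only the session key under the recipient's public key */
        unsigned char wrapped[512];
        size_t wlen = sizeof wrapped;
        EVP_PKEY_CTX *p = EVP_PKEY_CTX_new(recipient, NULL);
        EVP_PKEY_encrypt_init(p);
        EVP_PKEY_encrypt(p, wrapped, &wlen, key, sizeof key);
        EVP_PKEY_CTX_free(p);

        /* what gets sent: wrapped key + iv + ciphertext; only the holder
           of the recipient's private key can unwrap and then decrypt */
        printf("ciphertext %d bytes, wrapped session key %zu bytes\n",
               n + fin, wlen);
        EVP_PKEY_free(recipient);
        return 0;
    }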

Certification Authorities (CAs) have added another layer of complexity
to this. Certification Authorities (CAs) distribute their public key
certificates to lots of people. Then CAs generate specially signed
messages called certificates that attest to the binding of some
characteristic (like a person's name, or a domain name in the case
of SSL) to a public key. They sign this special certificate/message
with their own private key.

Now rather than a sender having to previously distribute their
public key "certificate" via some mechanism ... a sender can now
append their "CA" certificate to the end of each message they sign.

The recipient now validates the appended "CA" certificate/message with
the previously distributed public key of the
CA. Once that is done, they can take the public key in the
appended certificate/message and use it to validate the signature of
the sender.

This effectively adds one level of indirection compared to the PGP
scenario ... instead of every sender needing to use a special
out-of-band process for the distribution of their
publickey/certificate, only the CAs are required to have a special
out-of-band process for the distribution of the CA
publickey/certificate. This also adds one level of "trust"
indirection. Certificates can also be organized into a hierarchy where
there are multiple levels of indirection (as well as multiple levels
of trust indirection).

Note however that for the transmission of encrypted messages, the
CA-based mechanism and the PGP-based mechanism are basically the same;
aka at some time previously the sender of an encrypted message must
have acquired the recipient's publickey/certificate and nominally
recorded/saved it in some local repository.
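
the level of indirection can be sketched the same way, with a toy
"certificate" that is just the DER encoding of the sender's public key
signed by the CA (real certificates carry names, validity dates, and
ASN.1 structure; OpenSSL 3.x, link with -lcrypto):

    #include <openssl/evp.h>
    #include <openssl/x509.h>   /* i2d_PUBKEY / d2i_PUBKEY */
    #include <stdio.h>

    static size_t sig_make(EVP_PKEY *k, const unsigned char *m, size_t ml,
                           unsigned char *sig) {
        size_t sl = 512;
        EVP_MD_CTX *c = EVP_MD_CTX_new();
        EVP_DigestSignInit(c, NULL, EVP_sha256(), NULL, k);
        EVP_DigestSign(c, sig, &sl, m, ml);
        EVP_MD_CTX_free(c);
        return sl;
    }

    static int sig_check(EVP_PKEY *k, const unsigned char *m, size_t ml,
                         const unsigned char *sig, size_t sl) {
        EVP_MD_CTX *c = EVP_MD_CTX_new();
        EVP_DigestVerifyInit(c, NULL, EVP_sha256(), NULL, k);
        int r = EVP_DigestVerify(c, sig, sl, m, ml);
        EVP_MD_CTX_free(c);
        return r == 1;
    }

    int main(void) {
        EVP_PKEY *ca = EVP_RSA_gen(2048), *sender = EVP_RSA_gen(2048);

        /* the toy "certificate": sender's public key DER, signed by CA */
        unsigned char *der = NULL;
        int derlen = i2d_PUBKEY(sender, &der);
        unsigned char csig[512];
        size_t csl = sig_make(ca, der, (size_t)derlen, csig);

        /* the signed message, with the certificate appended in transit */
        const unsigned char msg[] = "transaction ...";
        unsigned char msig[512];
        size_t msl = sig_make(sender, msg, sizeof msg, msig);

        /* recipient: verify certificate with the CA key it already
           holds, then use the certified public key on the message */
        int cert_ok = sig_check(ca, der, (size_t)derlen, csig, csl);
        const unsigned char *pp = der;
        EVP_PKEY *certified = d2i_PUBKEY(NULL, &pp, derlen);
        int msg_ok = sig_check(certified, msg, sizeof msg, msig, msl);
        printf("certificate %s, message %s\n",
               cert_ok ? "ok" : "BAD", msg_ok ? "ok" : "BAD");
        return 0;
    }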

Don Quixote writes:
CUT and DFT? in very broad terms, the first is dumb, the second
smart.

note that while the 3277 CUT was a "dumb" terminal ... in that no microcode
could be downloaded ... it was much smarter than the 3278. A lot of
the head & keyboard function that was implemented in the 3277 was
moved back into the 3274 controller for the 3278.

we had done two modifications. one was a keyboard mod for a fast cursor
(actually controlling the repeat latency, plus the repeat rate, for all
keys). by appropriate selection of a resistor you wired inside the
keyboard, you selected the rate that suited you. I had a keyboard set to
the very shortest delay and fastest possible repeat. It did have the
shortcoming that it was faster than the screen refresh rate ... so there
was the effect of cursor "coasting" ... you held down a cursor motion
key and then had to get used to when to let up on the key so that it
would eventually stop at the desired location.

the other modification was the addition of a keyboard FIFO that went
into the display head ... you unplugged the keyboard from the
display, plugged in the keyboard FIFO box and then plugged the
keyboard into the FIFO box.

the problem was that while 3270s could operate at speeds of kbytes,
tens of kbytes ... they were actually half-duplex devices and had a
very unfortunate characteristic: if a screen update (from the
system, as opposed to simple keystroke copy/record) occurred just as a
key was being depressed ... the keyboard lost the keystroke and
"locked". You then had to hit the keyboard reset button to get it
back. for people used to full-duplex and a nominal typing rate ... the
keyboard locking was a frequent and unpleasant human factors
characteristic.

one justification that was given for this characteristic was that
3270s weren't designed for interactive computing ... they were
designed for data entry, and data entry people didn't operate in
full-duplex mode (there was almost no scenario where the screen would need
updating by the system while data entry was going on).

"Fraser Orr" writes:
An interesting aspect of this is that, since the pass phrase
contains more digits than the ultimate password, and since
these original characters are relatively predictable, such a
system could readily correct errors the user typed before
conversion to the password itself. For example, spelling errors,
and capitalization errors. (Allowing the elimination of the
all too common caps lock password error amongst others.)

the problem with shared-secrets is that they have to be different for
every different business and/or security domain (aka you don't want
some kid at your ISP knowing your banking or stock account password).
As the online environment proliferates, the number of different business
and/or security domains goes up sharply ... as does the number of
different business/security domains each requiring their own
authentication process. In a shared-secret scenario with passwords,
that means that the number of unique shared-secrets also increases
dramatically. So the problem becomes not only remembering a
specific password ... but remembering a very large number of
different specific passwords. Attempts to improve the shared-secret
paradigm for one specific security domain ... ignore the
real-world requirement for authentication in a large
number of different and independent business/security domains.

The most difficult task at the moment is not to improve the
characteristic of the something you know authentication requirement
(aka passwords/passphrases), but to 1) eliminate the "identity theft"
characteristic associated with shared-secret authentication schemes
and 2) replace the use of shared-secrets for authentication with some
other paradigm.

The design of passwords/pass-phrases as part of a something you know
authentication process is still valuable (in conjunction with two- or
three-factor authentication, i.e. something you know, something you
have, something you are), but it needs to be in the context of a
non-shared-secret paradigm ... aka being able to prove you know
something w/o having to divulge what it is you know ... and therefore
there is some possibility that the individual only has to remember a
very small number of passwords/pass-phrases instead of tens or hundreds
of them.
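
one concrete shape for such a non-shared-secret paradigm is public-key
challenge/response: the verifier stores only a public key, sends a
fresh nonce, and the claimant proves possession of the private key
without ever divulging it. a hedged sketch (OpenSSL 3.x, link with
-lcrypto; in practice the private key would live in a hardware token):

    #include <openssl/evp.h>
    #include <openssl/rand.h>
    #include <stdio.h>

    int main(void) {
        EVP_PKEY *key = EVP_RSA_gen(2048);   /* claimant's keypair */

        /* verifier: fresh random nonce per authentication attempt */
        unsigned char nonce[32];
        RAND_bytes(nonce, sizeof nonce);

        /* claimant: sign the nonce; the private key never leaves home */
        unsigned char sig[512];
        size_t siglen = sizeof sig;
        EVP_MD_CTX *ctx = EVP_MD_CTX_new();
        EVP_DigestSignInit(ctx, NULL, EVP_sha256(), NULL, key);
        EVP_DigestSign(ctx, sig, &siglen, nonce, sizeof nonce);
        EVP_MD_CTX_free(ctx);

        /* verifier: check against the registered public key ... nothing
           in its database is useful to an identity thief */
        ctx = EVP_MD_CTX_new();
        EVP_DigestVerifyInit(ctx, NULL, EVP_sha256(), NULL, key);
        int ok = EVP_DigestVerify(ctx, sig, siglen, nonce, sizeof nonce);
        printf("authentication %s\n", ok == 1 ? "succeeds" : "fails");
        EVP_MD_CTX_free(ctx);
        EVP_PKEY_free(key);
        return 0;
    }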

cbh@ieya.co.REMOVE_THIS.uk (Chris Hedley) writes:
I believe that the product that really was written in every conceivable
language was AllInOne (or All-In-Bits as it was more commonly known), DEC's
answer to Uniplex (on UNIX) and PROFS (on IBM mainframes)

PROFS had some amount of internal politics. The core of PROFS was an
early 0.x version of an internal email application called VMSG, an
assembler-based application written by a programmer in the UK and
freely distributed internally with source. After the co-opting of an early
development version by the PROFS group, the source distribution was
restricted by the author to two people (me and one other person).

A later dispute about whether the core of PROFS was really an early
semi-functional, development version of VMSG was resolved by pointing
out that every PROFS message in the world had the VMSG author's
initials (HSL) tagged in a (not normally displayed) header control
field.

He was a very prolific programmer. One of his other applications that
saw wide internal use was Parasite & Story ... basically a virtual
terminal scripting application ... it allowed anybody to run a
(3270) terminal emulation to nearly any system on the internal network
(ala telnet) with extensive scripting including sophisticated
output string matching and conditional programming (in some sense a
precursor to some of the later PC-based "screen-scraping" applications).

from some dark archive (REX was the early, internal name for what is now
called REXX) ... comment header from story assemble:

jmaynard@thebrain.conmicro.cx (Jay Maynard) writes:
Uhm, it's in the base distribution of OS/360 21.8F. Might have been added
late in life, but it's definitely there as IMASPZAP. Was OS/360 revved some
after the introduction of SVS?

i remember using it at least in the release 14 - 15/16 time-frame
for (some) PTF application.

remember, 15 didn't really ship; it was a consolidated 15/16
release. 15/16 was also the first release that allowed you to specify
where the vtoc went, i.e. you could place the highest-accessed data in
the middle of the pack and then array data out in both directions.

I had been doing "hand" built sysgens since 9.5, aka I would take the
stage2 output of stage1 (and "in-queue" build since 11, aka rather
than do sysgen with starter system, build with production system)
... and rather than single job with large number of exec steps, would
place a job card on each exec step, completely re-arrainge each job
order so that data would be built on the drives to create optimal arm
access order, and also re-arranged major move/copy statement ordering
to also create optimal arm access ordering.

The elapsed time to run a FORTG (single-step) fortran compile was
reduced from approx. 30 secs (on a straight starter-system-built
MFT14 system with HASP) to 12.9 secs with my hand-built custom MFT14.

The problem, of course, was that standard PTF (load-module
replace/relink) activity had a major downside effect on data placement
on disk. Load-module (in places like sys1.linklib & sys1.svclib)
replacement would effectively invalidate the old PDS member and write
the new PDS member at the first available location in the dataset.
After six months, such activity could degrade the sample FORTG test
job elapsed time from 12.9 to 20 seconds or more. A "COMPRESS" of the
PDS would remove the no-op'ed PDS members and recover the space, but
didn't allow any careful member placement ordering.

Basically, every six months would have to rebuild the system (if not
doing it for other reasons like going to a new release).

3270 protocol

ab528@FreeNet.Carleton.CA (Heinz W. Wiggeshoff) writes:
Speaking of the TD4000, in the latter 70's I had one in my office to
communicate to a 370-based service bureau in both ordinary text (ASCII)
and VS/APL mode (when I switched the daisy wheel).

actually we had translate tables for both ASCII apl as well as ebcdic
apl (there were actually tables for two types of 2741, plus tty, with
apl translate tables for all three).

cbh@ieya.co.REMOVE_THIS.uk (Chris Hedley) writes:
That sort of thing is all too common; I think I've had enough of that
type of politics to last me a lifetime!

What was the outcome? PROFS went on to be quite a profitable application,
so I suppose that the issue regarding VMSG was either resolved or otherwise
forced?

they (eventually) closed peterlee and he was transferred to hursley
(the peterlee closing was not related to this) and told he could only
work on cics (even on his own time) ... or it could cost him his job. we
participated in a little subterfuge ... i gave him a userid on a west
coast system that he could use and we covered the tracks.

PROFS group never did really fix their crippled, early development
implementation of VMSG (superior technology doesn't necessarily count
for much, as people have frequently discovered).

Anne & Lynn Wheeler writes:
PROFS group never did really fix their crippled, early development
implementation of VMSG (superior technology doesn't necessarily count
for much, as people have frequently discovered).

sometime in the past, somebody made the observation that too many
skilled/smart people on a project can prevent any solution in projects
with more than a few unknowns (the highest-skilled people advocate
conflicting solutions and the rest of the group is unable to resolve
them, because a minimum of one solution is required first in order
to resolve the unknowns).

there is another measure of group dynamics which represents the aggregate
intellectual/skill capacity that the group is able to bring to the task

HP Compaq merger, here we go again.

Anne & Lynn Wheeler writes:
so combining the two, multiple high intellect/skiled individuals can
not only nullify each other ... but can actually degrade effectiveness
of everybody's skills ... say

there was an actual project that spent a couple years arguing over
various issues and effectively had resolution paralysis ... the total
resources to implement and deploy all possible solutions (and come to a
resolution based on real world data) was nearly an order of magnitude
less than the resources that went into arguing, attempting to come up
with an up-front resolution before starting the effort (not to mention
that having even the least optimal solution in the market-place was
orders of magnitude better than having no solution).

this led to some number of successful executives who could make snap
decisions, even if they were the least technically optimal ... and
still come out way ahead (even random selection can work & don't confuse
me with facts).

Scot Wilcoxon writes:
Yes, Unix doesn't isolate users as thoroughly as some other systems (the
obvious example for Unix is its inspiration Multics, where Unix
intentionally is simpler than Multics). But it is this security philosophy
which protects against virus/worm attacks.

When the system itself cannot be altered by users, a malicious program
is restricted to damaging user resources. The most common Unix weakness
is that user programs can use more than their fair share of resources
such as memory and I/O channels. Malicious programs can damage user
data -- although to classic Unix systems that is indistinguishable from
other user programs.

a major percentage of unix & network exploits have been

1) buffer overruns, their number and frequency a direct result of common
C-language string-handling semantics (aka systems with other semantics
are far, far less prone to such exploits)

2) university &/or novice-developed applications targeted for a co-op
environment, with significant back-doors for things like
ease-of-maintenance (becoming much less frequent over time, at least
in unix)

#1 has been a common characteristic of all C-language based environments
(regardless of the system)
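
the C string-handling semantics in question, in miniature: the callee
has no way to know the destination size, so the overrun is one strcpy
away (buffer size and inputs are arbitrary):

    #include <stdio.h>
    #include <string.h>

    /* the classic idiom behind so many overrun exploits: strcpy has no
       length check, so attacker-sized input writes past buf onto the
       stack */
    void unsafe(const char *input) {
        char buf[16];
        strcpy(buf, input);
        printf("%s\n", buf);
    }

    /* the same operation with an explicit bound; languages whose string
       type carries its own length get this check for free, which is
       the point about the semantics */
    void safer(const char *input) {
        char buf[16];
        snprintf(buf, sizeof buf, "%s", input);  /* truncates, never overruns */
        printf("%s\n", buf);
    }

    int main(void) {
        safer("a string much longer than sixteen characters");
        /* calling unsafe() with the same argument is undefined behavior */
        return 0;
    }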

More recently, a whole new category has appeared associated with
automatic scripting (& misc. other "ease-of-use" based) exploits. I
first ran into such a (network auto-scripting) problem in the early
'70s and it was addressed at that time .... but it appears to have
come back with a vengeance in the past couple years.

3270 protocol

jot@visi.com (J. Otto Tennant) writes:
"Crippled?" Doesn't that depend on the design of the application?

the original comment was that we had a list of complaints about the 3270
(like: if you were typing at the wrong moment when the system updated
the screen, the keyboard would lock and you would have to hit reset,
and anything you happened to type in the interim was lost). the
response was that the 3270 was designed for data entry ... not interactive
computing. That doesn't imply that it wasn't better than many things
that had been supposedly designed for interactive computing (sometimes
raw speed can count) ... it's just that the 3270 could have been better
(like the FIFO keyboard buffer in the 3277 to overcome the keyboard lockup
problem). Not only was it designed for data entry, but who would
possibly want to be typing when the system had something to do?
Another complaint was trying to reposition the cursor on the screen; the
speed was ... chug, ... chug, ... chug, ...

So does anybody remember the up/down, scroll-up/scroll-down,
page-up/page-down editor wars with edgar/red/xedit/etc (from the early
to mid 70s)? aka, in effect, "were the commands done with respect to
the program or the human"? edgar was with respect to the program:
"scroll-up", in effect, moved the document "up" (the view moved down).

"Tzvetan Mikov" writes:
I am not sure what you mean by "multithreaded kernel". The nature of the
tasks performed by the kernel usually doesn't require a thread. Although
worker threads are used to carry out some tasks internally, most of the work
in the NT kernel is not done in such a context. There are asynchronous
queues for IO requests, timers, interrupt processing, etc. Each queued
procedure is executed as soon as possible on any available CPU without a
context switch.

If by "kernel" we mean all parts of the OS that happen to execute in
privileged mode (file system drivers, IO drivers, etc), then the kernel is
most definitely multithreaded. On the other hand, I don't think the term
"multithreaded" can be applied to the micro-kernel.

frequently it refers to the degree to which multiple different
processors/CPUs can be executing in the kernel concurrently. Sometimes
this is dependent on the locking model ... does it apply to code or to
data structures ... and the granularity of the locks (i.e. how many
different processors can be executing the same low-level code
concurrently, even in a microkernel ... or are some processors/cpus
going to be held up ... either in spin-locks or other serialization
methods?).

this sometimes shows up as less-than-linear scaleup as you increase
the number of processors ... for certain types of workloads ... say an
ip-intensive workload that, based on ip kernel pathlengths, is capable
of consuming 16 processors just in kernel code; would going from an
8-processor configuration to a 16-processor configuration double the ip
thruput?
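
a toy illustration of the granularity point, using pthreads: with one
global lock, updates to *different* structures still serialize; with a
lock per structure they can proceed in parallel (the queue structures
are invented for the example):

    #include <pthread.h>
    #include <stdio.h>

    struct queue { long depth; pthread_mutex_t lock; };
    static struct queue ioq  = { 0, PTHREAD_MUTEX_INITIALIZER };
    static struct queue runq = { 0, PTHREAD_MUTEX_INITIALIZER };

    static pthread_mutex_t giant = PTHREAD_MUTEX_INITIALIZER;

    /* coarse: every caller funnels through the one giant lock, so two
       cpus touching unrelated queues still wait on each other */
    void enqueue_coarse(struct queue *q) {
        pthread_mutex_lock(&giant);
        q->depth++;
        pthread_mutex_unlock(&giant);
    }

    /* fine-grained: the lock travels with the data structure, so
       unrelated updates run concurrently */
    void enqueue_fine(struct queue *q) {
        pthread_mutex_lock(&q->lock);
        q->depth++;
        pthread_mutex_unlock(&q->lock);
    }

    int main(void) {
        enqueue_coarse(&ioq);
        enqueue_fine(&runq);
        printf("ioq=%ld runq=%ld\n", ioq.depth, runq.depth);
        return 0;
    }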

"GerardS" writes:
On the VM/CMS systems, as soon as the user pressed enter (or some other
function key for example), the VM system immediately sent back an
unlock, thereby minimizing the amount of time the keyboard was locked.
On channel-attached 3270s, this was almost instantaneous, so that I
could hold down the ENTER key (making use of the typomatic key), and
the CMS (VM) system would be able to process all the ENTERs. I would
do this while in XEDIT mode, or plain CMS mode, and the system would be
able to keep up with the multiple (if not infinite) ENTERs. One could
also XEDIT a very large file and just keep the PF8 (down) depressed and
watch XEDIT scroll down the file.

STL was bursting at the seams ... so they needed to relocate about 300
people from the IMS group to a remote, leased building about half-way
between STL and the main plant site. Rather than make them endure the
horrors of remote 3270s, they were going to get local 3270s with
HYPERchannel as channel extender. There were some issues getting the
support to run over a dedicated T1 line since it was only
1.5mbits/sec ... even tho the aggregate data rate didn't tend to
exceed the line rate, the channel-attach 3270s had a higher burst rate
and so there were some speed-matching and pacing issues.

One of the unexpected outcomes of the whole thing was that the
mainframes back in STL started running with 10-15% better thruput.
Standard operating procedure up until then had been to spread the 3274
control units across all available channels, on the same channels as the
disk controllers (just because the disks had been spread across available
channels for load-balancing, leaving a lot of unit addresses available
on each channel).

The HYPERChannel support that I wrote used a single HYPERChannel A220
on the mainframe end to drive the T1 and the 3274 controllers (for
300+ 3270s) and misc. other channel controllers at the remote site.

The problem was that while the 3274 controller could burst data transfers
at channel speed to the 3270s ... it had significant control overhead
that resulted in significant channel busy. It turned out that the bad
3274 channel busy overhead was causing significant interference with
disk activity.

The HYPERChannel A220 controller hung on the mainframe channel had
significantly better (tremendously less) channel busy overhead compared
to the 3274s. Remoting all the 3274s out to HYPERChannel A51x channel
simulators (and performing the actual channel operations on an A220)
turned out not to degrade the thruput perceived by all of the 3270s
AND significantly reduced overall channel busy across all the
mainframe channels, significantly improving disk thruput and resulting
in a 10-15% overall system thruput increase (how's that for a run-on
sentence).

The above helped overall system thruput and helped mask the poor
channel busy characteristics of channel attach 3274 controllers.
After that, there started being a lot more recommendations about 3274
controller placement to minimize interference with disk thruput. SNA
3274s couldn't really be considered an option since the resulting 3270
performance characteristics were really terrible for interactive
computing.

Those controller characteristics were orthogonal to the keyboard lockup
issues ... I kept my trusty modified 3277 (fifo keystroke buffer and
modified repeat key operation) until almost 1990 (as backup, long
after I had 3270 emulation on a PC and could program around the
shortcomings).

SMP idea for the future

"Tzvetan Mikov" writes:
It seems that NT should be doing well in that respect since it has been
designed from the ground up for SMP. I haven't had the luck to examine its
kernel sources :-), so I can't say anything with certainty, but it seems so.
On the other hand, I haven't had first-hand experience with NT running on
more than 2-CPUs and of course I have heard the rumours that NT doesn't
scale well for more than 16 (or even 8?). I don't know whether it is true
and to what extent it has to do with x86 server hardware design (aren't x86
servers just pumped up PCs?)

i don't know what it is now ... but a couple years ago ... I was told
by numerous knowledgeable people that an ip-intensive workload wouldn't
even come close to scaling linearly from 2-processors to 4-processors,
and that extensive kernel work was going on to fix large numbers of
non-fine-grain locks (2-processor thruput was based on one
processor basically executing the application and one processor
executing the kernel).

Something can be designed to have locking support for SMP ... but
still not be able to scale. Lots of easy SMP implementations would do
gross-level serialization locks on major blocks of code ... allowing
the kernel to run on multi-processor hardware (but effectively with the
kernel only executing on one processor at a time) ... typically relying
on large amounts of non-kernel application execution to achieve thruput
... aka little or no "effective" kernel SMP threading, with horrendous
resulting thruput characteristics for workloads requiring significant
kernel pathlength.
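
a back-of-envelope way to see why: if a fraction f of the total work
is kernel code serialized under a gross-level lock, best-case speedup
on n processors follows Amdahl's law, 1/(f + (1-f)/n). a tiny
calculator (the f = 0.5 workload is hypothetical):

    #include <stdio.h>

    /* Amdahl's law: serialized fraction f, n processors */
    static double speedup(double f, int n) {
        return 1.0 / (f + (1.0 - f) / n);
    }

    int main(void) {
        double f = 0.5;   /* half the cycles in a giant-locked kernel */
        for (int n = 1; n <= 16; n *= 2)
            printf("%2d cpus: speedup %.2f\n", n, speedup(f, n));
        return 0;
    }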

Is anybody out there still writing BAL 370.

"Hank Murphy" writes:
Several people have already referred to the presence of assembler in banks.
Even in banks which are successful in re-engineering their old assembler
mainline processes into C++, there will still be a need for assembler
programs and programmers to work with something called CPCS (Check
Processing Control System). There are only two makers of large-scale check
sorting equipment (IBM and Unisys (ex-Burroughs)). NCR and some smaller
manufacturers concentrate on the smaller end of the business. So, if you
maintain an IBM mainframe, you are going to have assembler in banks.

Allodoxaphobia writes:
Of course, no Good Idea goes un-noticed for long.
Eventually, SuperZap 'escaped' into the user/customer
world -- just like CICS did (he said, starting a new thread drift.)

Jonesy
IBM MainFrame since 1966

CICS was developed at a customer shop ... which IBM then picked up and
made into a product. I was an undergraduate at a university that was one
of the original IBM CICS beta-test sites and had to "shoot" some number
of bugs for the university.

Why is UNIX semi-immune to viral infection?

Len Budney <lbudney-usenet@nb.net> writes:
No, you've missed the point. Once you've grouped TWO unrelated services
under one UID, the mistake is made. I don't give a crap how many MORE
you then cram into the same UID. Therefore, qualifying ''catch-all'' in
any way is pointless. Had Maclaren advocated putting two daemons under one
UID, let alone a ''catch-all for scruffy little services'', my reaction
would have been the same.
...

Give me a break. Just stop hiring brain-dead administrators. ''We make
a point of insecuring our systems, because securing them confuses the
admins.''

at some point you need to recognize that while something is
technically possible, it may be practically difficult or near
impossible (the line about "in theory there is no difference between
theory and practice, but in practice there is"; for instance, it may
be that all the non-brain-dead admins have already been hired, or
somebody just increased the salary bidding by a factor of three). In
those scenarios it may be necessary to change the paradigm in the face
of real-world conditions in order to achieve the objective.

DEC midnight requisition system

pechter@i4got.pechter.dyndns.org (Bill Pechter) writes:
I was at DEC in '85 -- which authorized me to break the logistics manager's
door down at DEC's Penn Plaza NY location to get a set of roms to fix a
problem which we in NJ were suffering with for days when an idiot manager
put a third-party controller under maintenance and had the controller
type wrong and no parts or docs to fix it.

The logistics manager was out on his boat on Friday night.
He was "unavailable" according to their staff.

They thought it was Emulex -- it was Plessey. They thought that they
had spares... -- they did for the Emulex.

Anyway the roms needed to change the Emulex emulation to the same as the
failed Plessey (so the customer's OS would work with it without a new
sysgen) were under lock and key at the DEC logistics manager's
desk. I was authorized to use the fire axe to free them and bring them
to New Jersey.

When NY's managers heard I was headed their way someone got to him
via Ship-To-Shore radio or he finally answered his beeper. They
delivered up the parts before I got to crack the office and desk.

I also once stole 5 brand-new RA81 HDAs from DEC's Princeton, NJ office
to go to FMC's Princeton Vaxcluster.

ok, when lincoln labs discontinued the duplex 360/67 ... the moving
van was called to pick up the machine and deliver it back to the
manufacturing plant in kingston.

cambridge wanted a duplex machine for doing the SMP work (in part
directly responsible for charlie's work that resulted in the
compare&swap instruction) ... it had a simplex 360/67 but not a
duplex. The head of cambridge called the moving company and told them
that there was a change in delivery: the lincoln '67 was to be
delivered to the 2nd floor, 545 tech. sq. in cambridge (not to the
kingston plant). It took the kingston plant another six months to
track down somebody in cambridge to ask if a 360/67 duplex had been
delivered there by mistake (all before my time).

"Charlotte Cannaert" writes:
Hi folks,
I am new on the net and I am very interested in internet banking; it's
extremely easy and handy. In my opinion this is the future.
Still I am very worried about the security of this. Nowadays the net isn't a
safe place anymore...
Is it really safe, can people 'steal' money?
Does anyone have some experience with it?
What can I do to protect myself from intruders?
Please tell me lots of things about this issue... it would help me very much
Thanks a lot everybody,
CS from Belgium

note that the internal network & facilities were contemporary with the
arpanet, although it was a larger network with more participants from
just about the beginning up thru circa '85 (including multiple PROFS
precursors in the '70s).

electronic communication between users on the same multi-user
time-sharing machine, ala email, dates from the 60s. that was extended
to networked multi-machine "email" with pieces of the internal network
in the 1970 time-frame (the link between cambridge and endicott).

SMP idea for the future

"Tzvetan Mikov" writes:
A naive question perhaps, since I am not a TCP/IP expert, but why does the
TCP/IP stack need threads of its own at all? I imagine an implementation
could be entirely event driven: user requests (send()/recv()/etc) are
processed in the context of the calling thread, network packets are queued
and dequeued on interrupt, timeouts are handled in an asynchronous timer
routine. I don't see anything that needs to execute continuously in a thread
of its own.

low-level IP code utilizes some number of shared buffers and queues
common to all IP processing. Lower-level device driver code also has
some number of shared buffers and queues. Simple SMP implementations
tend to implement locks &/or serialization primitives around whole
logical sections of code, minimizing the concurrent-execution SMP
issues that would otherwise have to be handled.

With a large number of processors and applications all pushing IP
output traffic down the stack, threads can bottleneck on serialization
primitives.

Very high-speed network hardware with lots of incoming traffic can
generate (incoming) events that all bottleneck on serialization
primitives.

This is true of almost any kernel function/resource that lacks
extremely fine-grain locking & serialization primitives. At the
simplest, the whole kernel has a single lock implementing large
granularity serialization (i.e. the kernel can only be executing on
one processor at a time, regardless of the number of processors). At
the other extreme, every couple of instructions has an extremely
fine-grained locking/serialization operation whenever possibly shared
resources are involved (the worst case is that more processor cycles
go to executing serialization functions, on the off-chance that two or
more processors might be attempting to execute the exact same set of
instructions concurrently). In part, because serialization semantics
can result in consumption of CPU cycles ... there can be a trade-off
between the granularity and amount of serialization operations and the
degree of concurrency that can be achieved.
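
for illustration, the fine-grain end of the spectrum might look like
per-bucket locks on a shared hash table (a hypothetical sketch, not
any specific kernel's code):

    /* fine-grained locking sketch: one lock per hash bucket instead of
     * one lock for the whole table. more concurrency ... but every
     * lookup now pays lock/unlock cycles even when no other processor
     * is anywhere near the same bucket. */
    #include <pthread.h>
    #include <string.h>

    #define NBUCKET 256

    struct entry { struct entry *next; char key[32]; int value; };

    static struct entry    *bucket[NBUCKET];
    static pthread_mutex_t  bucket_lock[NBUCKET];   /* one lock per bucket */

    void table_init(void)
    {
        for (int i = 0; i < NBUCKET; i++)
            pthread_mutex_init(&bucket_lock[i], NULL);
    }

    static unsigned hash(const char *k)
    {
        unsigned h = 5381;
        while (*k)
            h = h * 33 + (unsigned char)*k++;
        return h % NBUCKET;
    }

    int lookup(const char *key, int *value_out)
    {
        unsigned b = hash(key);
        int found = 0;

        pthread_mutex_lock(&bucket_lock[b]);   /* serialize one bucket only */
        for (struct entry *e = bucket[b]; e; e = e->next)
            if (strcmp(e->key, key) == 0) {
                *value_out = e->value;
                found = 1;
                break;
            }
        pthread_mutex_unlock(&bucket_lock[b]);
        return found;
    }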

One example workaround can be to go back and totally redesign the
implementation for high concurrency while minimizing serialization
overhead. One such solution is to batch allocate and/or reserve all
dedicated resources that have some possibility of being requested by
the event, and then dole out the resources, as they are actually
requested, from the event/thread-specific pre-allocation (the penalty
for pre-allocating some resources that aren't actually used is traded
off against the reduction in the amount of serialization semantics
that have to be used during the course of the event handling).
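
a minimal sketch of that batch pre-allocation idea (the pool
structure, buffer sizes, and function names are all hypothetical):

    /* take the global pool lock once per event, grab enough buffers
     * for the worst case, dole them out within the event with no
     * further serialization, and return the leftovers in one more
     * locked operation at the end. */
    #include <pthread.h>
    #include <stddef.h>

    struct buf { struct buf *next; char data[2048]; };

    static struct buf      *free_list;      /* global shared pool */
    static pthread_mutex_t  pool_lock = PTHREAD_MUTEX_INITIALIZER;

    /* one locked operation: reserve up to n buffers for this event */
    struct buf *reserve_bufs(int n)
    {
        struct buf *head = NULL;

        pthread_mutex_lock(&pool_lock);
        while (n-- > 0 && free_list) {
            struct buf *b = free_list;
            free_list = b->next;
            b->next = head;
            head = b;
        }
        pthread_mutex_unlock(&pool_lock);
        return head;        /* private list: no locks needed to use it */
    }

    /* within the event, doling out a buffer touches no shared state */
    struct buf *event_get_buf(struct buf **reserved)
    {
        struct buf *b = *reserved;
        if (b)
            *reserved = b->next;
        return b;
    }

    /* one more locked operation: return whatever wasn't used */
    void release_bufs(struct buf *head)
    {
        struct buf *tail = head;

        if (!head)
            return;
        while (tail->next)
            tail = tail->next;
        pthread_mutex_lock(&pool_lock);
        tail->next = free_list;
        free_list = head;
        pthread_mutex_unlock(&pool_lock);
    }

the lock is taken twice per event instead of once per buffer; the
trade-off is that buffers reserved but not used are unavailable to
other processors for the duration of the event.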

I-net banking security

"lyalc" writes:
ID and passwords are the only widespread means of authentication today, and
tomorrow if PKI gets adopted at all.
Remember, certificate/private keys require passwords to allow their use.

intranet security and user authentication questions

Webmaster writes:
once all 500 user accounts are created. Am I barking up the wrong tree
here by using .htaccess? What's a better solution? I am also thinking
ahead, ultimately I need the site to deliver content dynamically and
specifically only to the particular user that logged in based on their
sign-on. Is there anything (esp. in linux/Apache) that I can use to
control this? Any advice, tips, sites, references that anyone can give
is sincerely appreciated.

the traditional web approach has been to place a hook in the webserver
client authentication stub (nominally effectively a no-op). this is
frequently a roll-your-own implementation using a flat file with
userid/passwords.
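
a toy version of such a roll-your-own check might look like the
following (purely illustrative; a real implementation would store
hashed passwords, htpasswd-style, rather than plaintext):

    /* flat-file check: one "userid:password" pair per line */
    #include <stdio.h>
    #include <string.h>

    int check_user(const char *file, const char *user, const char *pass)
    {
        char line[256], want[256];
        int ok = 0;
        FILE *f = fopen(file, "r");

        if (!f)
            return 0;
        snprintf(want, sizeof want, "%s:%s", user, pass);
        while (fgets(line, sizeof line, f)) {
            line[strcspn(line, "\r\n")] = '\0';   /* strip newline */
            if (strcmp(line, want) == 0) {
                ok = 1;
                break;
            }
        }
        fclose(f);
        return ok;
    }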

another approach would be to hook the client authentication stub with
a radius interface ... and then support all flavors of whatever radius
supports (i.e. selectable on an account-by-account basis).

The corporate environment may already be using radius for client
authentication for internet->intranet or dial-up corporate intranet
access.

bq434@freenet.carleton.ca (Yvan Loranger) writes:
You contradict this below, 2 times. A program's pages can offer
structure, especially if they are about to be paged back in. So
readahead could be beneficial, (offhand I'd say especially at track
boundaries).

MVS & VM introduced the concept of "big pages" around 1980
... initially targeted for 3380s. When the time came for certain page
replacement operations ... a collection of a task's pages was
selected, equivalent to a track's worth, and written out as a "big
page". On a page fault, if a page was a member of a "big page", the
"big page" was brought in as a unit. There were various rules about
what was considered a candidate for membership in a big page ... and
various kinds of trimming and segregation went on to try and cluster
meaningful members of big pages. At that time, a 3380 track held 10
4k-byte pages ... and so the implementation moved from doing nominal
4k-byte page I/O operations to 40k-byte page I/O operations.
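
conceptually, the clustering step amounts to something like the
following sketch (the structure names and selection policy here are
invented; only the 10-pages-per-track number comes from the 3380):

    /* gather up to a 3380 track's worth (10) of a task's trimmed 4k
     * pages into one "big page" so they can be written -- and later
     * faulted back in -- as a single 40k unit. */
    #define PAGES_PER_TRACK 10      /* 3380: ten 4k pages per track */

    struct big_page {
        int   npages;
        void *frame[PAGES_PER_TRACK];   /* member 4k page frames */
    };

    int build_big_page(struct big_page *bp, void **candidates, int ncand)
    {
        int i;

        bp->npages = 0;
        for (i = 0; i < ncand && bp->npages < PAGES_PER_TRACK; i++)
            bp->frame[bp->npages++] = candidates[i];
        return bp->npages;  /* one 40k write replaces up to ten 4k writes */
    }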

part of the motivation was the significant increase in 3380 data
transfer rate (compared to prior disks) w/o any comparable increase in
arm access and rotational delay. in effect, doing "big transfers",
while it might increase real storage requirements ... was otherwise
utilizing a resource (disk transfer) that was significantly
underutilized.

However, PKI is still a 2-factor system; something you have (the
certificate), and something you know (the PIN).

Standard ID/Password systems are only 1 factor (something you know, you just
happen to know 2 things).

(Conversely, you could add, in place of the "token" or certificate,
something you are e.g., a voice print, retina print, finger print, etc.).

in 3-factor authentication, both something you know and something
you are can be implemented as either a shared-secret paradigm or a
non-shared-secret paradigm (aka ... the PIN value and the biometric
template ... can be considered somewhat equivalent authentication
"values"). one difference (in a shared-secret paradigm) between PIN
and biometric ... is that if the value becomes compromised, it is
somewhat easier to issue a new PIN than it is (at least given current
technology) to issue new fingerprints (or finger length, or retina, or
whatever).

it is possible for both something you know and something you are
in 3-factor authentication ... to be implemented in a
non-shared-secret paradigm ... i.e. given a personal hardware token,
the hardware token does a "match" on an entered PIN or an entered
biometric value ... and then operates appropriately.
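
a sketch of that non-shared-secret pattern (the structure and names
are hypothetical; a real token would also use a constant-time compare
and do all of this inside tamper-resistant hardware):

    #include <string.h>

    struct token {
        char reference_pin[16];   /* NUL-terminated; never exported    */
        int  unlocked;            /* set only by a successful match    */
        int  tries_left;          /* lockout counter against guessing  */
    };

    /* the entered PIN is compared *inside* the token; no relying
     * party ever stores or sees the reference value */
    int token_verify_pin(struct token *t, const char *entered)
    {
        if (t->tries_left <= 0)
            return 0;                          /* token locked out */
        if (strcmp(t->reference_pin, entered) == 0) {
            t->unlocked = 1;
            t->tries_left = 3;
            return 1;
        }
        t->tries_left--;
        return 0;
    }

    int token_sign(struct token *t, const void *msg, void *sig_out)
    {
        if (!t->unlocked)
            return 0;     /* private key unusable without the match */
        /* ... signature computed entirely inside the token ... */
        (void)msg; (void)sig_out;
        return 1;
    }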

the x9.84 (financial biometric) standard has a large number of
security recommendations surrounding biometric authentication when
implemented in a shared-secret paradigm (because of the significant
risk and lack of easy remediation when a value has been compromised;
aka new finger grafts?).

Bernd Paysan writes:
The future is true zero-copying (with scatter-gather, so that the OS can
create the headers in a separate memory). The main problem here is a
brain-damage in the protocol, because it puts the checksum before the
data, so you still have to read the data completely to create the
correct header.
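
the point can be made concrete with the internet checksum itself (RFC
1071): the 16-bit checksum that goes in the header is computed over
the data, so the sender has to make a full pass over the payload
before the header word can be filled in, even when the NIC gathers
header and payload from separate buffers:

    #include <stdint.h>
    #include <stddef.h>

    /* RFC 1071 internet checksum: one's-complement sum of 16-bit words */
    uint16_t inet_checksum(const void *data, size_t len)
    {
        const uint8_t *p = data;
        uint32_t sum = 0;

        while (len > 1) {                   /* sum 16-bit words */
            sum += (uint32_t)p[0] << 8 | p[1];
            p += 2;
            len -= 2;
        }
        if (len)                            /* odd trailing byte */
            sum += (uint32_t)p[0] << 8;
        while (sum >> 16)                   /* fold carries back in */
            sum = (sum & 0xffff) + (sum >> 16);
        return (uint16_t)~sum;
    }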

Eric Sosman writes:
My hope is that the latter will follow from the former
more or less automatically. A programmer who reads code from
diverse sources (or sources in diverse codes?) will form some
ideas about what makes for Good or Bad code. If said programmer
has the brains God gave little green apples, these ideas will
come to influence his or her own output.

a programmer who does really well in a programming language would
need to be proficient ... in the same way that people are proficient
in a natural language. There is the old saying about thinking &
dreaming in the language ... as opposed to thinking in some other
semantic context and having to translate; aka the programmer would
literally "think" in the (programming) language.

The standards for "good" &/or "proficiency" in a language for people
who don't actually think in the subject language are totally
different (i.e. it is usually pretty easy to recognize speakers who
are constantly having to translate from some other context, when they
are speaking as well as when programming).

In the mid-80s there was somebody who sat in the back of my office
for 9 months and took notes on all my communication. All my email and
immediate messages were also logged and analysed (there was some
statistic that, over the 9-month period, I averaged email
communication with 275-some different people per week). It resulted in
a Stanford PhD thesis ... joint between linguistics and computer AI
(i.e. it was looking at computer-mediated communication and how it
compared to natural language communication). There was some comment
that the analysis indicated I might be more proficient in some
programming languages than in my native English.

(atomic) lockless manipulation of data structures comes from Charlie's
work at the Cambridge Science Center on the compare&swap instruction
in the late '60s, and made it into the 370 mainframe machines (early
'70s). The original work was to address SMP serialization issues
... but got push-back about adding an instruction to the architecture
that was SMP-specific, and POK architecture (padegs & smith) requested
a paradigm/description showing its use in uniprocessor environments
(the birth of the compare&swap programming notes that appeared in the
370 principles of operation ... effectively multi-threaded
applications regardless of uniprocessor or multiprocessor).
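
the usage pattern from those programming notes survives essentially
unchanged in modern C11 atomics; a minimal sketch (lock-free list
push; popping safely needs additional care for the ABA problem):

    #include <stdatomic.h>

    struct node { struct node *next; int payload; };

    static _Atomic(struct node *) list_head;

    void lockless_push(struct node *n)
    {
        struct node *old = atomic_load(&list_head);

        do {
            n->next = old;  /* build update against the observed head */
        } while (!atomic_compare_exchange_weak(&list_head, &old, n));
        /* on failure 'old' is reloaded with the current head; retry */
    }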

SMP idea for the future

Anne & Lynn Wheeler writes:
(atomic) lockless manipulation of data structures is from Charlie's
work at Cambridge Science Center on compare&swap instruction from the
late '60s and made it into the 370 mainframe machines (early

"Walter Rottenkolber" writes:
I'm often surprised at how few new ideas there are in computing. What often
seems new is really new technology that allows for an old idea to be
implemented at an affordable price.

I imagine a lot of wheel reinventing happens because the old information is
proprietary. It's also a time to take a second look and see if there isn't a
better way.

i think there is an old(?) saying about computer science having a
brain wipe every 5-6 years (allowing a lot of freedom with regard
to re-inventing wheels).

there was a posting about a '98 paper regarding non-blocking semantics
for multiprocessing concurrent operation ... and then somebody
referenced some even earlier work in an '88 paper. I got to reference
the original compare&swap work that charlie did at the cambridge
science center in the late '60s, which resulted in the compare&swap
instruction going into standard 370 machines in the early '70s, and
the description regarding non-blocking semantics for both
multiprogramming as well as multiprocessing. Also, as an aside, the
term compare&swap was something that had to be conjured up to go along
with the mnemonic CAS ... because C.A.S. are charlie's initials.

Is anybody out there still writing BAL 370.

Wild Bill writes:
It is the physical key of the record. It is pointed-to by a CCHHR stored in
the VOL1 record, which is firmly planted at CCHHR 0,0,3 of every disk
volume.

and the ability to specify the cylinder of the vtoc was introduced in
os/360 release 15/16 ... prior to that the vtoc was at a fixed
location .. prior posting in this thread:
http://www.garlic.com/~lynn/2001k.html#37

it may not be so significant now, with all the caching that goes on
... but it made a difference, along with all the other stuff that i
had done for careful disk-location layout of system data (on 2314s).

"John Hadstate" writes:
Suppose I am trying to transmit encrypted data through a
relatively noisy channel. I can add enough redundancy to the
message to allow me to recover almost all message blocks
without requesting a retransmission. Could the addition of the
error-correction envelope around the ciphertext in any way
compromise the security of the encryption? Is it possible to
combine the enciphering step with adding error-correction so
that we produce a cipher with error-correcting properties?

that would imply that there were some very straightforward operations
that could be applied to the ciphertext to yield the plaintext; if
that turned out to be true ... those operations could be applied by
anybody, which would imply a weakness in the cipher.
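
the safe composition is the other way around: encrypt first, then wrap
the ciphertext in the error-correction envelope, so the redundancy
reveals structure only of the ciphertext. a toy sketch (the XOR
keystream stands in for a real cipher, and 3x repetition with majority
vote stands in for a real code such as reed-solomon):

    #include <stdint.h>
    #include <stddef.h>

    void toy_encrypt(uint8_t *buf, size_t n, const uint8_t *keystream)
    {
        for (size_t i = 0; i < n; i++)
            buf[i] ^= keystream[i];     /* stand-in for a real cipher */
    }

    /* ECC envelope: transmit each ciphertext byte three times */
    void ecc_encode(const uint8_t *ct, size_t n, uint8_t *out /* 3*n */)
    {
        for (size_t i = 0; i < n; i++)
            out[3*i] = out[3*i+1] = out[3*i+2] = ct[i];
    }

    /* bitwise majority vote corrects any single corrupted copy */
    static uint8_t majority3(uint8_t a, uint8_t b, uint8_t c)
    {
        return (a & b) | (a & c) | (b & c);
    }

    void ecc_decode(const uint8_t *rx, size_t n, uint8_t *ct_out)
    {
        for (size_t i = 0; i < n; i++)
            ct_out[i] = majority3(rx[3*i], rx[3*i+1], rx[3*i+2]);
    }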

in the '80s, one of the companies prominent in reed-solomon encoding
developed a strategy (for some FM-radio digital system) involving
selective resend of a Viterbi 1/2-rate code; the redundancy implies
that recovery could be performed even if the original and the
selective resend were both received with uncorrectable errors (with
15/16s reed-solomon).

HP Compaq merger, here we go again.

ab528@FreeNet.Carleton.CA (Heinz W. Wiggeshoff) writes:
And the personal hygiene facilities were extremely limited. One
German submariner commented that he couldn't stand the smell of
No. 4711 Kölnisch Wasser after his stint in the U-boat. But if
you think about it, having a shower in a sub is a little like
the proverbial screen door - makes surfacing a bit dicey.

i went on a 100-mile hike in the sierras in the early '80s; coming
down into yosemite valley from the back side (up and over glacier(?)
pass, after having stopped at devils postpile on the back side), being
up above 10k feet most of the trip ... most bar-soap and/or tobacco
smells could be recognized a couple hundred yards away and resulted in
a nausea reaction. It was a month before I could enter a public place,
like a large grocery store, without a nausea reaction if there was
even a single person on the opposite side of the store who had
recently smoked and smelled of tobacco.

barry.a.schwarz@BOEING.COM (Schwarz, Barry A) writes:
Central storage is where your programs must reside to execute. It is where
all data and instruction addresses resolve to. It is also where your data
buffers must be to do I/O. It is addressed at the byte level.

Expanded storage is like cache or a ram drive that allows memory to be used
as if it were an I/O device. One common use is for frequently accessed page
frames that are not quite frequent enough to stay in central storage. It is
addressed at the page level (I think I have this correct).

expanded storage originally on the 3090 had a wide transfer bus and a
synchronous page-transfer instruction (i.e. the elapsed time for the
instruction was significantly less than the pathlength to schedule a
page i/o and process the resulting interrupt). the expanded storage
bus on the 3090 was the only place that had a transfer rate that could
tolerate a HiPPI connection (800mbit/sec transfer), so instead of
hooking it to any i/o interface ... the HiPPI interface & HiPPI
devices (various disk arrays, processor-to-processor, etc.
connections) were grafted onto the expanded storage bus. It was a
little hokey since you couldn't do channel programming ... it was a
little more like peek&poke PC I/O programming ... where transfer
commands were moved to "reserved" expanded storage addresses.

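a hypothetical sketch of what that peek&poke style amounts to (the
command-block layout and field names here are invented for
illustration, not the actual 3090 interface):

    #include <stdint.h>

    struct xfer_cmd {               /* command block at a reserved address */
        volatile uint32_t opcode;   /* e.g. 1 = read, 2 = write            */
        volatile uint32_t length;   /* bytes to transfer                   */
        volatile uint64_t src_addr;
        volatile uint64_t dst_addr;
        volatile uint32_t status;   /* 0 = idle, 1 = busy, 2 = done        */
    };

    int hippi_transfer(struct xfer_cmd *cmd, uint64_t src, uint64_t dst,
                       uint32_t len)
    {
        cmd->length   = len;
        cmd->src_addr = src;
        cmd->dst_addr = dst;
        cmd->status   = 1;          /* mark busy before kicking it off   */
        cmd->opcode   = 2;          /* "poke": writing opcode starts op  */
        while (cmd->status == 1)    /* "peek": poll until hardware done  */
            ;
        return cmd->status == 2;
    }
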
from some long ago archive
Re: Extended vs. expanded memory
just to "refresh your memory"...

"Extended memory" refers to RAM at addresses 100000-FFFFFF. Although
the PCAT only permits 100000-EFFFFF.

"Expanded memory" refers to the special Intel/Lotus memory paging
scheme that maps up to 8 megabytes of RAM into a single 64K
window beginning at absolute address 0D0000.

"Expended memory" refers to RAM that you can't use anymore. It is
the opposite of Expanded Memory.

"Intended memory" refers to RAM that you were meant to use. It is
the opposite of Extended Memory.

Disappointed

Stewart Morin CES1999 writes:
I noticed that there were a lot of sarcastic and ignorant replies to
Ryan's plea for help. I am also undertaking the essay which is a 3rd
year masters introductory CAD essay. We realise there are the
conventional avenues of research such as libraries and search engines.
However, being a news group dedicated to CAD, we thought the advice from
some so-called experts would help us make a good start on the class.
We also thought a news group was a forum for intelligent chat,
conversation and sharing of ideas, not an excuse to boast about one's
intellect and belittle those with less knowledge than others. I hope
this only applies to a minority of users in this news group and is not a
reflection of all subscribers.

for quite some years ... there has tended to be a rash of homework
requests posted to various usenet groups around the start of the fall
semester. strong negative response seems to have been the most
effective way of curtailing such extraneous & unwanted postings.
Polite answers have tended to be much less effective ... in part
because that population has been somewhat self-selecting: 1) not
having done sufficient research in various of the usenet etiquette
documents before making the postings in the first place (aka RTFM), 2)
not having spent some amount of time lurking and experiencing usenet
etiquette before jumping in as if they knew what they were doing (in
effect, dis'ing the regular group participants).

Lastly, the observed behavior may be standard behavior for postings in
some usenet groups (nothing personal).