Consoles in use,
enlargement -
from Joseph Sickel
(In August 2005, Mike Loewen informed me that these "... are
actually from the AN/FYQ-93 (the ROCC), the successor to SAGE. A minor 20
year discrepancy. :-)")

It is useful to think of the SAGE computer as the heart of a process control system:
- Inputs from "sensors": primarily multiple radars via the
FST-2,
FAA traffic data, and operator consoles.
- A computer to present data to operators for human decisions.
- Multiple outputs to "actuators": fighter commands, missile command centers.
- Communication with adjacent systems for continuous coverage of a larger area than can be handled by one system.

Dual Processor

one on line,

the other for training, maintenance, hot backup

much communication to remote sites, guiding manned/unmanned vehicles

Hot-pluggable modules: power did not have to be removed from the
rest of the machine to remove/insert a plug-in module.
(An OFF/ON switch for each module.)

A 32-bit machine; in an instruction word, the left half held the op code and the right half the address.
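As an illustration only (the real AN/FSQ-7 instruction layout had additional fields, and the field widths here are assumptions), the half-word split described above can be sketched as:

```python
# Illustrative sketch: split a 32-bit word into a 16-bit "op" half and a
# 16-bit "address" half, as the text describes. The actual AN/FSQ-7
# instruction format was richer; these field names are assumptions.

def decode(word: int):
    op = (word >> 16) & 0xFFFF   # left half-word
    address = word & 0xFFFF      # right half-word
    return op, address

op, addr = decode(0x1234_00FF)   # op = 0x1234, address = 0x00FF
```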

The SAGE system was a component of a larger system called
416L, the North American
air defense system.

I revisited the SAGE question, concerning the oscilloscope function towards the left end of the supervisory/maintenance panel, with Bill Flora last week. Bill was a field engineer on SAGE and he worked on a SAGE site in Oregon as an input/output system specialist.

Bill said that he never worked with this function, which may indicate that it was a feature that was not used a lot. But Bill pointed out that to the left of this apparently blank panel, there is another panel with a number of coaxial (signal) connectors, which are each labelled with a signal source in each of various key areas about the SAGE system. The connectors are like those above the panel (type BNC), which implied to him that using a "patch" cord, an FE could use the oscilloscope function to monitor the status of the various parts of the system.

SAGE was a vacuum tube system, and it was common practice to check the health of such systems (likelihood of near-term failure) by reducing the supply voltages of individual circuits or groups of circuits to the point of failure. By tracking this information over time, it was possible to predict an imminent failure before it occurred. This process was used regularly in SAGE because of the very large number of vacuum tubes, all of which had to function in order for the system to do its job. The process was called "marginal checking," and Bill thought the feature on the panel that we have been wondering about may sometimes have been used as a part of the process. Of course, marginal checking was only done on the system that was offline, rather than the one that was in use at the time.
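The trend-tracking idea behind marginal checking can be sketched as a toy model (all numbers are invented; the real procedure varied supply voltages on the offline machine and maintenance crews judged the trends):

```python
# Toy model of the "marginal checking" idea: each check lowers a circuit's
# supply voltage until it fails, and the recorded margin (volts below
# nominal at failure) is tracked over time. A collapsing margin predicts
# an imminent failure. All numbers are invented for illustration.

def predicted_margin(history):
    """Linearly extrapolate the next margin from the last two checks."""
    if len(history) < 2:
        return history[-1]
    return history[-1] + (history[-1] - history[-2])

def needs_replacement(history, threshold=5.0):
    """Flag a circuit whose next margin is projected below the threshold."""
    return predicted_margin(history) < threshold

healthy = [30.0, 29.5, 29.0]   # margin shrinking slowly -> leave it alone
aging   = [30.0, 20.0, 10.0]   # margin collapsing -> replace tubes now
```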

This long winded explanation is probably somewhere near the truth. It clearly was a built-in test equipment feature that was made available to the field engineers to be able to monitor the health of the SAGE system. These features and the expertise of the field engineers combined to maintain an excellent record of reliability.

I hope that I have not beaten the subject to death.

Dave Bennet

Radar data, including azimuth & elevation estimate, was digitized, time tagged, and transmitted to SAGE
by the FST-2 and its descendants.

Jim...The "Semi-Automatic Ground Environment" (SAGE) was named that because of the human input needed to collect the data, not to make decisions in any way.... Even today decisions are still made by people and not computers... To aid that idea SAC even instituted the "Fail Safe" program.

What made it automatic was the program flow of the SAGE computer.... In simple terms the system used the search radars to "find" the target in horizontal space and provide the azimuth to the computer..... the computer then sent back the azimuth to the height finder and slewed the HF to the correct azimuth......here is where it became semi-automatic..... it knew where in horizontal space the target was but had no idea where in vertical space it was.....so it needed human input: the operator would take a horizontal cursor, place it over the target on the HF scope, and then press a button telling the computer that it could now take the human input of the height of the target...... once the computer knew where in 3-D space the target was THEN it could make predictions for that target....
In today's world the button push was the same as clicking the mouse cursor.... For all its size and complexity the average SAGE computer was about as smart as your average tablet today...Even computers still need humans to interface with the real world..

> Well if that system works as described, when it encounters radar jamming
> it will send a horde of pulses to the computer thus jamming it.

I must agree with Les -
As described in useful detail in the article -
["AN/FST-2 Radar-Processing Equipment for SAGE"]
the radar video comes into the AN/FST-2
along with sync pulses (from the pulses being transmitted)
and with antenna azimuth at that moment -
( The radar pulse rate was 330 PPS )

Assuming one target ;-))
the AN/FST-2 makes range and azimuth "buckets"
and looks for the "left" and "right" edge azimuths of the target
in that range. When the "right edge" of the echo is
detected, the equipment determines the center of the edges
and transmits the center azimuth, and range (bucket) to SAGE :-))
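The left-edge/right-edge/center scheme described above can be sketched like this (a simplification under stated assumptions: one target, one range bucket, and a made-up azimuth step size):

```python
# Sketch of the AN/FST-2 "edge detection" idea: scan hit/no-hit decisions
# per azimuth step within one range bucket, note the azimuth where the
# echo starts ("left edge") and where it ends ("right edge"), and report
# the center azimuth. One target assumed; step size is invented.

def detect_center(hits, deg_per_step):
    left = right = None
    for i, hit in enumerate(hits):
        if hit and left is None:
            left = i                 # first azimuth step with an echo
        if not hit and left is not None and right is None:
            right = i - 1            # last step before the echo ended
    if left is None:
        return None                  # no target in this range bucket
    if right is None:
        right = len(hits) - 1        # echo ran to the end of the scan
    return (left + right) / 2 * deg_per_step

# Echo present on steps 10..14 of a 0.5-degree-per-step scan:
hits = [False] * 10 + [True] * 5 + [False] * 5
center = detect_center(hits, 0.5)    # (10 + 14) / 2 * 0.5 = 6.0 degrees
```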

And the AN/FST-2 and SAGE could seemingly work with a reasonable number of targets.
All very nice unless someone is trying to make a mess of your scope (video).

Let us assume one simplistic jammer, just transmitting radar noise steadily
at you - that noise makes a radial line on your scope and video -
The AN/FST-2 has signal at all ranges near that azimuth -
When the antenna rotates away from the jammer, (no more signal)
the AN/FST-2 will report planes at all range buckets at the center azimuth -

OK - so SAGE "knows" there are say 100 aircraft (assuming 100 range buckets)
in a line on that azimuth -
just great :-(( all caused by one jammer :-((
Unfortunately
a) there is likely more than one jammer :-((
b) various other jamming techniques make the scopes (and video)
even more messy
c) soon the AN/FST-2 communication line would be jammed with bogus targets
d) and SAGE has way more "targets" than it can possibly analyze/track

Unfortunately, my Army Nike missile site never saw jamming while I was there. (I left in 1957)
( I have seen PPI scope photos of jamming during "exercises" in Germany involving Soviets.
http://ed-thelen.org/ecm_ppi_3.jpg What a confounded mess !! )
A few years later T-1 trailers which simulated jamming of various sorts
were available to raise heck with, and train, Nike tracking operators and tracking supervisors
to resist/fight/track-through jamming.
(And the new Nike Hercules tracked targets using two different tracking radars
using two different radar bands ( X & Ku ) - the Army eventually took jamming very seriously !! )
(The tracking supervisor had a display of active radar frequencies
and a number of controls for transmitter frequencies, pulse widths,
and other techniques to try to dodge/minimize jamming to aid the tracking operators.)

I'm near the limit of my current knowledge - bowing out, with my ears on -

The link describes some equipment manufactured by Lewyt and I believe the question above is directed to a statement about the range processed by the AN/FST-1. The AN/FST-1 used “slowed-down video” (SDV) to process the radar data from a Gap Filler radar (most of them were AN/FPS-18) to the parent radar site or directly to the Direction Center (I think the latter was possible). Gap Filler radars were short-range units and I think 64 miles would be a very probable range limit for one. They were positioned to provide lower-level coverage within holes in the coverage provided by the major LRR sites. The entry for the AN/FPS-18 in the Radomes equipment list gives 65 nm as its range.

I said above that I thought it might have been possible for the Gap Filler data to go directly to the SAGE site, but I am not really sure. I don’t recall having seen evidence of it at Adair AFS (PoADS). There was an odd device sometimes used with Gap Filler data in the manual system that used multiple little PPI scopes displaying the SDV from the associated GF sites with high retention, each of the little PPIs precisely located relative to the main LRR site position at the range setting specified. A scan by a rotating photosensor was then used to detect the blips displayed by the GF radar(s) and that information was superposed over the main radar to supplement the coverage. I did not learn how the AN/FST-1 worked and never saw any use of the video from the 2 GF sites associated with Mt. Hebo AFS. Both its Gap Fillers were never in operation when I was assigned there and both had been decommissioned before I went to Adair.

Rich,
I was 656th Saratoga Springs 1958-1961. We had 3 gap fillers. There was no one at the gap fillers. They were controlled through the main site with the AN/FSW-1 - turning equipment on and off and switching channels through the FSW-1. If a problem could not be fixed through the FSW-1 link then someone was dispatched from the main site to do the repair. We sent a computer maint tech and a radar maint tech to do repairs. Some sites were very remote and made a hell of a drive in the winter.

Jack Kerr
-----------------

Hi all. Spent about 3 years on the sites out of Homestead, Fl..had 3,
Naples, Long Key, and Jupiter...all FPS 18..great radars, great assignment..

-------------------

So you were stationed at Homestead and traveled to the Gap Filler sites when repairs were needed, and there were no personnel actually stationed at the Gap Filler sites?

Thanks

Rich Bettencourt

--------------------------------------

I covered Gap Filler Sites out of Osceola, Wisconsin.

Osceola had a radar section of about 9 that maintained 4 remote Gap Filler Sites from the Osceola location.
A team of 2 airman would be assigned to a remote site to which they would travel to for about 2 days a week.

For large projects, the assigned personnel for that site would get help from 1 or 2 of the other teams.

The team for the site was responsible for all elements including grounds maintenance and snow shoveling. One of the teams would always be on standby should one of the sites go down between visits and would travel to the site day or night to repair.

-----------------------------------

Thanks. Sure helps explain things. I was amazed to see how many gap filler sites there were when I checked it out on Radomes: 131. Imagine what it cost to put all those sites in and for such a short period before they were nixed.

The other thing was where many of these gap filler sites were. They obviously were located in places where coverage was needed and not necessarily easily accessible by those who provided support and maintenance for them. I suspect that contractors brought in fuel now and then, so there must also have been contracts with the locals to keep access roads cleared at sites impacted by heavy snow.

Did these support teams have their own fleet of special vehicles?

---------------------------------

Re rich question Anyone stationed at Gap Filler sites back in the day?

Rich I spent over 5 years assigned to Gap Fillers and visited several others for assistance. All of the GF sites in US were unmanned. Several outside the US main land were manned because of their extreme location.

The first GF I was in charge of was close, less than 50 miles from the main site. We went to the site almost every day just to check things out. We rarely did any maintenance as the equipment was very reliable. The GF was our only assignment and we went to the site just like a regular crew would check out main-site equipment. We were on call nights and weekends. We rarely got called out because of the dual channel equipment and the ability to reset or change channels via the FSW-1. The other GF I was assigned to was located too far away to visit daily, so we would go out on a two day trip once a week. We had two GFs to take care of and two two-man crews. We went every week to each site and had call for nights and weekends. Again, we rarely got called out.

We had dedicated individuals assigned to GF maintenance. Not every GF site had that, and it usually showed in the reliability of the equipment. Just like any other radar equipment of the time, if properly maintained the equipment was reliable. The radar was an FPS-14 or FPS-18. Data transmission was via the FST-1. Communication to the site was via the FSW-1. The FSW-1 had a voice link as well as circuits to monitor and reset/change channels. The sites had a standby diesel generator that was totally automatic.

Individuals assigned to a site had to know radar, data collection and transmission via the FST-1, as well as control via the FSW-1. Knowledge of diesel and electrical control circuits was helpful. Power to the equipment was via a large electrical motor-generator setup. When a power failure occurred the diesel would start and provide power to the motor-generator in less than 10 seconds. The rotational inertia of the motor-generator would keep supplying power so the equipment did not shut down.
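A back-of-envelope sketch of why the motor-generator's rotational inertia can bridge a roughly 10-second diesel start (all numbers here are invented for illustration, not taken from the actual gap-filler equipment):

```python
# Ride-through estimate for a motor-generator set: the usable kinetic
# energy between nominal speed and the lowest tolerable speed, divided
# by the load power, gives the seconds the rotor alone can carry the
# site. Inertia, speeds, and load are all assumed values.
import math

J = 150.0             # rotor moment of inertia, kg*m^2 (assumed)
f0, f1 = 60.0, 55.0   # Hz: nominal and lowest tolerable speed (assumed)
w0 = 2 * math.pi * f0
w1 = 2 * math.pi * f1
load_kw = 25.0        # site load (assumed)

usable_j = 0.5 * J * (w0**2 - w1**2)          # usable stored energy, joules
ride_through_s = usable_j / (load_kw * 1000)  # seconds of ride-through
```

With these made-up numbers the rotor bridges well over ten seconds, which is the point of the design: the diesel only has to start before the flywheel effect runs out.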

Sites did not have beds or cooking facilities so local motels were the place to go. There were not any bathroom facilities unless someone put in an outhouse. The last two sites I was assigned to had hot and cold running water. We dug a well and used the heating boiler to heat the water. We installed a sink for washing. We also put in a suspended ceiling and tile floor, and built a wall to separate the equipment to make a small office area. We installed a tool board so we did not have to bring tools with us. Transportation was via GSA vehicles. Grass had to be mowed as well as the normal site maintenance work.
To me, GF duty was easy and an excellent job.

al dennis

-------------------------

Hi Ed…I spent a couple of years at Richmond AFS (Homestead) maintaining the same three gap fillers (1962-1964). I totally agree…the FPS-18 was a great, reliable and very maintainable radar! Bendix did good with that one!

-------------------------

the 792nd N. Charleston, SC. had 3 gap fillers. one on the back side of Parris Island (yes the Marine base near Beaufort, SC) always fun going there.... one just south of Georgetown, SC and one at Stateburg, SC (near Sumter, SC). We had three 2-man crews assigned to gap fillers. we usually spent 2 days per week at the site doing maintenance, etc. when not at the site we were on standby 24/7.. what we did was every third week (when not on site) one crew would pull standby for all the sites, including the weekend. back in the day, there were no pagers or cell phones, so on your week you had to be near a phone 24/7... some weekends you would get an opportunity to visit all three sites...all the crews were 303x2, radar maintenance, and we took care of all the equipment on the site including the power generator...other than the standby, it was pretty good duty. we had an AF survey team visit our Georgetown site (my site) to evaluate our capability. Had a T-33 come in at ~ 100 feet and we were able to track to within a 2 mile radius of our site... the search radar at N. Charleston could not pick up the aircraft at all. i.e. the term "gap filler". mainly against manned aircraft and low flying surface launched missiles. would give you a little more reaction time in case of attack. and we all know (knew) what the first target would be during an attack.

Fowler - 65 - 73

-------------------------

I spent a year at a manned Gap Filler at La Scie Nfld. from 1957-1958. It was Det#2 of the 921st AC&W Sq. at St.Anthony. We had about 20 people including 4 Canadian civilians who maintained the generators. We communicated with St.Anthony via HF radio. Just before I left they installed a tropo scatter radio. Transportation in & out was via SA-16 seaplane from Gander RCAFB, or float equipped Beaver aircraft. We got our mail by paradrop from C-47 Goonie Birds. TSGT Allen Miller USAF RET

http://oai.dtic.mil/oai/oai?verb=getRecord&metadataPrefix=html&identifier=AD0264873
THE FEASIBILITY MODEL OF THE DIGITAL COMMUNICATIONS SET, AN/GSC-4 FIELD TEST PROGRAM

Abstract : A field test program was carried out to determine the feasibility of the AN/GSC-4 digital communications set. The AN/GSC-4 feasibility model was designed as a high-speed digital data modem for conveying binary data over toll telephone facilities. Laboratory tests were conducted to determine error rate characteristics as a function of white and impulse noise, frequency translation, and modulation. Field tests were conducted to determine the performance of the system over commercial phone lines, a SAGE tropospheric scatter link, the White Alice tropospheric scatter network, an electronic switchboard, and the 465-L remote communications complex breadboard. The laboratory tests showed the S/N ratio (in dB) necessary for a binary error rate of 1 x 10^-4 when operating at 5400 bits/sec; for 2400 bit/sec operation, an error rate of 1 x 10^-6 could be realized with an SNR of 16 dB with synchronization modulation. Without sync modulation a 3 dB improvement is obtained. (Author)

> Roy, As far as I know there were no serious problems in getting digital communications to work in SAGE. Below are two paragraphs from an article I'm writing that somehow got oddly formatted here. It is interesting to note that AT&T played no role in inventing the modem (which was not called that initially) and even after computer networks were shown to work they refused to believe it.
>
> In 1976 the Defense Communications Agency, which had recently gotten control of ARPAnet, offered it free to AT&T if they would agree to run it for a fee. However AT&T refused, apparently with Bell Labs' blessing, on the grounds that packet switching had no future.
>
> Thus even though Bell Labs was innovating in a number of areas then, they apparently couldn't stand the idea that someone outside had superseded them in their base business.
> -------------------------------------------
> Who invented the modem?
> Given that modems are an essential part of computer networking you might expect that they were created for that purpose but it wasn't quite like that. In 1949 Jack Harrington and his group at the Air Force Cambridge Research Center (AFCRC) wanted to be able to view radar data from a remote site, but the bandwidth of radar video was too great to go over ordinary phone lines. They created a Digital Radar Relay that identified blips, located their centers and sent digital packets, one per blip, over a phone line to the display site. An improved version of the modem (which was not yet called that) was patented by Jack Harrington and Paul Rosen and became the basis of Bell Telephone's A-1 Data Service.
>
>
> Who invented packet switching?
> It was Jack Harrington's group around 1953, after they moved from AFCRC to MIT. As part of the experimental Cape Cod air defense project (the SAGE prototype) they had Burroughs Corp build the FST-2, a special purpose computer used to process data from radars for transmission to the Whirlwind computer at MIT. The same scheme was used in its offspring, SAGE. Overall the SAGE network interconnected hundreds of sites across North America beginning in 1959.
> --------------------------------------------------
>
> -Les

Actually it was quite the reverse. The early Q7's were installed with only 2 each 4K memories
fondly called "shower stalls" and the test program was called MEM01.

Later on the 65K memory was retrofitted to replace one of the 4K units
including the associated driver frames.

It was a massive retrofit and as I remember it, required the IBM team to work 7 days a week
for 6 weeks on all three shifts with extended shift lengths.
Rumor had it that each retrofit cost as much as the associated computer originally cost.

At the completion, the test programs were then called BIGMEM and LILMEM
reflecting the size differences.

SAGE computers did record radar data on magnetic drums but kept only
about two minutes worth at any given time and discarded old hits as new
data came in. These data were used by the computer to automatically
track aircraft and those tracks often were recorded on magnetic tape.
However this process would not have "seen" a diverging radar blip unless
the radar data was being displayed (it usually wasn't) and it was
noticed by the Intercept Director following that flight.
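The rolling two-minute store described above behaves like a time-bounded ring buffer; a minimal sketch (retention time matches the text, hit format and timestamps are assumptions):

```python
# Sketch of the rolling ~two-minute radar-hit store described above,
# with a deque standing in for the magnetic drum: new hits are appended
# and hits older than the retention window are discarded as new data
# comes in.
from collections import deque

RETENTION_S = 120.0   # "about two minutes worth at any given time"

class DrumStore:
    def __init__(self):
        self.hits = deque()          # (timestamp, hit_data) pairs

    def add(self, t, hit):
        self.hits.append((t, hit))
        # discard old hits as new data comes in
        while self.hits and t - self.hits[0][0] > RETENTION_S:
            self.hits.popleft()

store = DrumStore()
for t in range(0, 300, 10):          # one hit every 10 s for 5 minutes
    store.add(float(t), {"range": 42, "azimuth": t % 360})
# only the hits from the last two minutes remain on the "drum"
```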

Postscript:
The electronic warfare aspect of SAGE needed to be written because it would have had a major effect
on the outcome, had we actually gotten into the hypothetical WWIII scenario. We got close to
it during the Cuban Crisis, and I sometimes wonder, "...what if..." When I was the acting ASO
the NORAD command alert speaker was right in front of my face, and its test messages were frequent.
My tech (AST) was immediately to my right, and the codebook was attached to his console's shelf.
I can still close my eyes and hear the voice of NORAD, saying "DIVIDEND, this is NORAD..."

My last year of active duty was at 30th (30-OOP-EW). One time a Major came down from NORAD
to brief us (NORAD was still at Ent AFB, not yet in the mountain), and something he said was
prophetic: He tossed his briefcase on the table and said, "Gentlemen, the day is coming when
we will be able to put the entire SAGE computer inside my briefcase." He never made a
prediction about how powerful it might be. The SAGE computer's core memory, about the
size of a telephone booth, was something like 57 K. The notebook that I'm typing on right
now has a 16 G SanDisk memory plugged into its USB port. The SanDisk is smaller than my thumb.

The Air Force electronics training was not wasted on me, and I pursued some AFOSR-funded
research between 1988 and 1998. During my graduate school days and throughout my 31 year
career as a chemistry professor (both in teaching and research) I was continually using
my knowledge of electronics. Even as a retiree I pursue electronics hobbies, designing
and building circuits. There is a seismometer in our basement; it has been running 24/7
since 2012.

- this note added September 2012
SAGE did not spring into being, from the womb of Whirlwind, without predecessors.
At least several navies had been working on the problem of air defense coordination.
Also note the generous use of "SEMI-AUTOMATIC" in relation to the problem/solution ;-))

from the book "When Computers Went to Sea"
by David L. Boslaugh 1999 - ISBN-10: 0769500242
starting about page 50

Three Digital Attempts

The Canadian Navy's Digital Automated Tracking and Resolving System

In 1949 the Royal Canadian Navy began conceptual work on its Digital Automated
Tracking and Resolving (DATAR) system which was to be based on a digital computer, and was
also to include a digital ship-to-ship data link. They first demonstrated the UHF (ultra high
frequency) digital tactical data link in shore-based tests from their Ottawa laboratory in
1950. The RCN then installed two prototype DATAR systems in the minesweepers Digby
and Granby operating in Lake Ontario. Each system had one special-purpose Ferranti
digital computer using 3,800 vacuum tubes and a magnetic drum main memory. The systems
filled most of the after part of the minesweepers, and, because of the large number of
vacuum tubes, overheating was a major problem.

The Canadian builders designed the system with a capacity for 64 targets with 40 yard
resolution over an 80x80-mile tactical grid. DATAR operators, using an electronic cursor
moved by a manual 'track ball,' picked target track coordinates from radar scopes, and
entered the track data into the computer.

Initial DATAR tests on Lake Ontario in August and September 1953 showed that the
Canadians were clearly the world leader in automating seaborne tactical data systems;
however, a major fire aboard one of the test ships halted testing and the project was terminated
due to lack of funds to reconstruct the destroyed system. But the project was by no means
a total loss. Mr. Stanley F. Knights, the leading Canadian scientist on DATAR, would be
made available as a consultant to the later USN Naval Tactical Data System project where
he would provide valuable technical support beginning in 1956.

Early Digital Experiments at the Navy Electronics Laboratory

In the first chapter of this narrative we followed the career of Irvin L. McNally from college
graduation in 1931 to his sudden transfer from Pearl Harbor to the Bureau of Ships in
Washington, D.C., where he reported in mid-July 1943. Here he took charge of the Shipboard
....
the course, and done extensive reading in the new field. With this combined background,
the next move for the three experimenters was a replication of their Coordinated Display
Equipment with digital technology.

If one wanted a digital computer in the early 1950s, you did not go out and buy it,
because there were none on the market. You built it. The three set out to design a
special-purpose digital computer tailored specifically for their radar data processing problem. With
help from Dr. Huskey, they built a computer having instructions for addition, subtraction,
multiplication, and division, and the ability to store track data in electronic registers. They
named the device the Semi-Automatic Digital Analyzer and Computer or SADZAC.
McCown later acquired a magnetic drum memory to expand the machine's track data
storage.

Nye and McCown also designed analog-to-digital converters to translate target coordinate
voltages to digital form for computer processing, and digital-to-analog converters to
turn the digital track storage information back into track coordinate voltages. These
voltages positioned the synthetic target symbols on the radar scopes. Their homemade
special-purpose digital computer, in effect, took the place of the CDE's capacitor storage banks.
But now, since the target coordinates were stored in digital form, the computer could
calculate the course and speed of each target from the sweep-to-sweep changes of the target
coordinates. By 1951 the NEL investigators had developed their SADZAC-based Coordinated Display Equipment to the point where it was ready for a real-life try, and they briefed
the BUSHIPS Radar Design Branch on how it could be applied to shipboard radar data
handling and air interceptor control calculations [ 179].
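The sweep-to-sweep course-and-speed calculation described above can be sketched as follows (units, the coordinate convention, and the sweep period are assumptions):

```python
# Sketch of deriving course and speed from the change in a target's
# stored coordinates between two successive radar sweeps, as the
# passage describes. Grid units and sweep period are invented.
import math

def course_and_speed(p0, p1, sweep_s):
    """p0, p1: (east, north) coordinates on successive sweeps."""
    dx = p1[0] - p0[0]
    dy = p1[1] - p0[1]
    speed = math.hypot(dx, dy) / sweep_s             # grid units per second
    course = math.degrees(math.atan2(dx, dy)) % 360  # 0 deg = north, clockwise
    return course, speed

# Target moves 3 east and 4 north during one 10-second sweep:
course, speed = course_and_speed((0.0, 0.0), (3.0, 4.0), 10.0)
```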

The Semi-Automatic Air Intercept Control System

In 1951 the BUSHIPS Radar Branch awarded a contract to Teleregister Company to
develop an automated plotting and vector computing aid for shipboard fighter direction, to
be based on the NEL Coordinated Display Equipment. The Bureau named the device the
Semi-Automatic Air Intercept Control System (SAAICS).

Fully deployed by 1963,
the IBM-built early warning system remained operational until 1984.
With 23 direction centers situated on the nation's northern, eastern,
and western boundaries, SAGE pioneered the use of computer control over large,
geographically distributed systems.

"By the way, the AN/FSQ-32 was to have been the "SuperSAGE" computer
for planned underground SuperSAGE Combat Control Centers. IBM developed
the computer (based upon the earlier AN/FSQ-7 and AN/FSQ-8 SAGE computers),
but the SuperSAGE facilities were cancelled. One site was to have been near
Cornwall, NY -- see http://www.radomes.org/museum/documents/CornwallNYnyt59.html.
"Many sites were examined for SuperSAGE. One was at Kennesaw Mountain,
Georgia ... Another was at White Horse Mountain, at Cornwall, New York ...
White Horse Mountain is just up the road from West Point." - "Shield of Faith"
by Bruce Briggs (Simon and Schuster, 1988). Reportedly, the AN/FSQ-32 computer
itself did find other applications -- just not SAGE air-defense applications."

Number of AN/FSQ-32 computers manufactured: 2
Locations: 1 at SDC; 1 at IBM
The site mentioned above for SDC was in Santa Monica, CA.
It was at 23rd and Colorado. About 2220 Colorado,
which looks like Universal Music Group now!
34 deg, 10 min, 40 sec North
118 deg, 28min, 25 sec West
I worked on it as a Field Engineer on the SDC account.
Any time! Happy to be able to remember!
Retired and live in Rio (Copacabana) these days!
Tough being single here! He he..
David Evan Young – IBM 1967-2007
Tel: + 55 21 7932 6850 (Cellular in Rio)

There were 3 SAGE Control (later Combat) Centers (AN/FSQ-8) in the U.S. Reportedly, 7 SAGE CC's total were planned.

A number of Super-SAGE Combat Centers (AN/FSQ-32) were planned, but none was built. Most were to have built underground (e.g., White Horse Mountain near West Point, NY); at least one SSCC was to have been above-ground (Scott AFB, IL). One prototype Q-32 was installed at the IBM programming center in Santa Monica, CA.

One remote SAGE Combat Center was activated at the former manual site at Hamilton AFB using a three-string BUIC-II computer, the AN/GSA-51.

I recommend avoiding contact with L.. E...... -- methinks he is a complete psychotic or something. Better contacts are out there -- I recommend starting with Mr. Robert F. Martina (318-797-5419), rfjm9870 @ aol . com .

By the way, Fort Lee AFS, VA, was HQ 20th Air Division (SAGE) at the time it deactivated in 1983. HQ 21st Air Division (SAGE) was located at Hancock Field, NY, until 1983; I was there when its FSQ-7 was turned off for the final time in October of that year. The 21st AD picked up all the former 23rd AD sites when Duluth shut down a couple of years earlier.

"I wrote the system test programs for the Sage FSQ-7 output system in 1954
and 1955. Later my group worked on the RTA computer which was a precursor
for the solid-state Sage II computer.

"In 1958 I was made the Development Engineering Manager for Central
Processor, Channels, and Operator's Console for the Sage II computer. I
believe that this was the largest transistor computer ever built. It was
intended to replace the old vacuum tube FSQ-7 systems. The System
Development Corporation did our programming.

"We completed the design (which used the Philco MADT transistors -- type
2n501 if I remember rightly) and began physical layout and construction only
to have the government cancel the Sage Program. Our design tested out
beautifully, but only two machines were ever built. One went to SDC in LA
for use in programming, the other went to SAC.

"Why do I never see any reference to the Sage II computer? It was one of the
first 100% self-checked machines. It had a 48 bit word. 6.4mc clock
frequency. It used liquid cooling. SDC said in 1966 that the machine they
had was the most reliable and maintainable they had ever worked with."

"Tom Marill and Larry Roberts performed the first computer-computer network experiment between the Q-32 and TX-2 (I believe).

"See Larry's page http://www.ziplink.net/~lroberts/InternetChronology.html says:
Oct-65 First Actual Network Experiment, Lincoln Labs TX-2 tied to SDC's Q32, Lawrence Roberts, MIT Lincoln Labs. This experiment was the first time two computers talked to each other and the first time packets were used to communicate between computers. "

SAGE Reunion - received May 2007

The Western Electric Air Defense Engineering Services (ADES) Alumni Group held its 25th reunion in Houston this past weekend, April 27-29. These were the people who integrated and tested the SAGE system at 23 sectors covering the USA back in the late 50's and early 60's.

About 500 engineers and other technical personnel were hired by ADES, trained at MITRE/Lincoln Labs, formed into five teams moving from sector to sector, integrating and testing this first big network of radars, computer centers, air bases, other inputs and ground to air data links. It also tied into the NIKE complex of ground to air missile sites.

Now all in their 70's they still recall the lure of the open road and the early days of computing and data transmission.

I deployed to College Eye in 1969. We deployed from McClellan AFB, CA on a C-141. We flew
to Wake Island and refueled and proceeded to Clark AB, PI, another refueling stop, then on to
Tainan, Taiwan to attend a briefing prior to flying to Korat, Thailand.

Our missions were Combat
Support. We were usually flying about every other day for about three weeks, then we would go back
to Taiwan for R&R for a few days. From there we would proceed to our northern operating location,
Itazuke AB, Japan. We would fly our missions from there in the middle of the Yellow Sea off the west
coast of Korea. We would only be there about 10 days, then rotate back to Korat. We were there on TDY
orders for 180 days. But there were changes: later they moved the mission up north to Korea. And
the earlier missions were flown in the Gulf of Tonkin from Viet Nam. You had to keep your bags packed.

I logged about 6,500 hours in the
EC-121s over a period of 8 years,
3,000 hours in my first three years. Then
they gave us certificates for each thousand hours. I still have the certificates. I was stationed at McClellan AFB, CA
in the 964th AEW&C Sq. twice, and the 966th Sq. at McCoy AFB, FL. I was also deployed twice to the Cuban crisis
and flew many
Gold Digger missions. Later I was deployed to Iceland twice.

Hofn, Iceland. The 932nd H-3 had been deactivated in 1960 and was being closed in January 1961.
At that time I was a SSgt, radar maintenance, at the Langanes, Iceland, 667th H-2 site.

The H-2 unit was very remote with weak and unqualified mid-level management within most major work
centers. Unit Command was so-so at best. This lack of leadership was the direct cause of the search
radar system and the protective radome being blown over the side of the radar tower during a winter
storm in January 1961. Against the advice of the onsite contract Technical Support Representative,
senior management actually stopped the maintenance crew from repairing a small radome air leak found
during normal preventive maintenance inspections that morning. When the storm and high winds arrived
that evening/night it didn't take very long for the change in radome air pressure and the force of
the winds to turn a small pin hole leak into a large rip; thus, the radome and radar went over the
side of the tower. In February 1961 the Unit was relocated to Hofn, Iceland, and the search radar system was reactivated.

When the bubble went over, I was on break. A/1C Odber D. Smith was radar maintenance crew on duty.
A SSgt Lee Cobb and A/1C John Johnson were the other crew chiefs. Not sure who was on the day/night shift.
Other personnel (who will remain unnamed) were a TSgt, the NCOIC of radar maintenance; his assistant, a SSgt;
and a CMSgt Radar Superintendent. In 24 years the TSgt was the worst supervisor I ever had or met. The CMSgt was
totally new to the AC&W radar field and just out of his element.

Why the bubble went over: That morning was a normal maintenance day. The crew on duty saw a pin hole
in the radome during their inspection and started to make the repairs. The TSgt stopped them, as he was
not taking the additional (RED) down time to make the patch. The Bendix Tech Rep went to the CMSgt;
but the CMSgt backed the TSgt. It should never have happened. I have seen winds just as bad or worse on Resolution
Island, south of Baffin Island, and at Cape Romanzof, Alaska. Those bubbles were solid, with the correct air
pressure applied on the inside.

Smith told me the TSgt tried to stuff rags in the rip, and they just made it out of the dome prior to the dome
going over the side. I watched the antenna and lights come through the dome and the wind push them over
the side, from the hallway opening for vehicles to drive through.

We made the move to H-3 by way of Keflavik. After two flights to H-2, and unable to go on to H-3, we flew to
Keflavik and spent 10 days in the air terminal. We were a rag-tag looking bunch and kept the Commander busy
running interference with the Air Police that were constantly walking the terminal.

I worked with Roland ("Rollie") [Roland D.] Pampel, who was the main software person,
and Bob Suda (software) when they developed the first-ever system program, called SEVA, for the SAGE system.

System Evaluation Validation Acceptance = SEVA

SEVA was a highly coupled software program that would perform a make-believe attack on America to show the system was ready to be accepted by the Air Force and to be shipped and deployed at a SAGE site.

SEVA was the work of two geniuses, and I and Russ Burger (engineers) were the only ones that could debug it during its inception.
Those days of designing and working on SAGE were the best days of my life.

A huskier version of the AN/FSQ-7 was also built, the AN/FSQ-32. There was also Whirlwind's little sister, built by the bright boys in the old Whittemore Shoe Polish factory around the corner on Vassar Street, which was called the Memory Test Computer or MTC, or sometimes referred to as Whirlwind 1 1/2. The MTC was built specifically to test Jay Forrester's magnetic core memory (see the free download of Chapter 6) before installing it in Whirlwind. Later, an all-transistor version (3,600 transistors) of the MTC was built, called the TX-0. In 1957, the 22,000 transistor TX-2 replaced the TX-0.

It was the TX-2, at the helm of which was Larry Roberts, that first sent digital packets of information across the continent to another of the Whirlwind progeny (the AN/FSQ-32) in California. That, of course, began Arpanet, which led to the Internet.

A recent 2010, and very readable, book also has some information:
"The Department of Mad Scientists: How DARPA Is Remaking Our World, from the
Internet to Artificial Limbs" by Michael Belfiore

"In the post-NASA world, ARPA emerged as a sort of dumping ground for military programs that could find no other home. In 1951, the Air Force needed to unload what one former ARPA staffer called an expensive white elephant (in the form of a major piece of computer hardware called the AM/FSQ-32DIA, and the fledgling R&D agency ended up with it.

"The 250 ton machine had been built by IBM as a spare for he Air Force's Semi-Automated Ground Environment, or SAGE, Program."

... I still intend to submit a proper
white paper. However, in the interim, below is a brief summary of my SAGE investigations
and facts that I've validated. The focus is to correct ubiquitous misconceptions by docents and in references at CHM and in other records. As a result of my contacts, IBM has changed some Webpages to reflect the actual number of SAGE computers that were built. We should consider doing the same where the information on our pages doesn't agree with the validated information.

After reading literally hundreds of pages, and having emails and telcons with responsible persons at IBM, MITRE, Lincoln Labs, and other places, it is clear that we should never rely on the memories of just one person and what they might write many years later without other confirming sources. I've found this to be true in the two early aviation history books I'm completing. It's astounding the variation in 'facts' between people who are considered to be responsible historians.

I've located records and responsible persons for all SAGE elements except for the RAPPI manufactured by Lewyt Electronics, subsidiary of Lewyt Vacuum Cleaner company. Many are kept on servers that CHM might consider asking for a data transfer to ensure future retention of the information.

I had been in contact with Ms. Mary Mullins - public relations VP at Thyssen-Krupp, eventual successor to Lewyt, asking for help in finding any archives. Unfortunately, she bailed because of the flame war emails engendered by a previous summary I sent out. At this point, any contact with Thyssen-Krupp would have to be at John's level to their subsidiary CEO, given what happened.

I found a RAPPI reference that said it was designed in 1955 with production starting shortly thereafter. If this can be validated, the LEWYT RAPPI could rightfully be called the first production transistorized computer, or at least a transistorized signals processor.

Roy

Summary:
Number of Direction (Sector) Control Centers 23 (22 if Thunder Bay isn't included)
Number of Combat Control Centers 4 ( 5 if Thunder Bay is included)
(Role of Thunder Bay isn't clear in items I reviewed)
Number of AN/FSQ-7 and AN/FSQ-8 computers manufactured 56
Locations of SAGE computers:
27 combat control or direction centers 2 at each center / total = 54
Programming Support Center 2 at Systems Development Corp. (SDC)
(A Vinton Cerf interview seems to say that he saw only one @ SDC)
Number of AN/FSQ-32 computers manufactured: 2
Locations" 1 at SDC; 1 at IBM
(One source states that 1 system went to the CIA)
Air Traffic Control Use:
Despite the fact that the SAGE system and its supporting radars acted as a de facto
North American air traffic control system, only 1 direction control center was ever a part
of the FAA system. It was later supplanted by a new system under FAA control.
Great Falls Montana Center/Malmstrom AFB system was used in early days due to cost
of a separate FAA system and because air traffic in the Dakota/Montana corridor was
so light. FAA history on the development of the civilian air traffic control system
virtually ignores any Air Force role in ATC.
Number of Building Stories
Some were three stories and some were four. There is a complete list available.
Note:
Haven't been able to find a definitive summary about how Cheyenne Mountain operated as
a Combat Operation Center, e.g. how their computers worked with SAGE and the type of
computers used. It must exist, so I'll keep looking during my recuperation.

Discussing SAGE is part of my tour lectures. A visitor asked about my assertion that
there were 28 locations where duplexed AN/FSQ-7's were installed; e.g. 56 computers.
My reply was that I had researched the subject but I would do so again just to make sure.

In reviewing my original SAGE research and doing new research, I found errors in a variety of places. That is expected in non-validated postings, but I found errors in many places, including various SAGE listings at the Computer History Museum Website, and even at IBM.

I'm attempting to get validated information from all SAGE-related vendors, and from archives where the companies no longer exist. I have a reply from MITRE and am awaiting replies from MIT/Lincoln Labs, AT&T, Western Electric, and Systems Development Corporation (Burroughs/UNISYS) archivists.

I also have a reply from another IBM office. However, they were unable to provide a complete answer.

Here is what I found on a search of IBM history Webpages:

"When fully deployed in 1963, the system consisted of 27 centers throughout North America..."

The number is correct when considering operational Air Force locations - 24 combat direction centers and 3 combat control centers. 27 centers x 2 = 54 computers.
2 additional AN/FSQ-7 computers were installed as a programming support center at RAND/Systems Development Corporation in Santa Monica, California.

I also found a reference that seems to be in error about the relationship of SAGE to the MIT/Whirlwind. Filename = teraflopattackilluminata.pdf. URL = www-03.ibm.com/servers/deepcomputing/pdf/teraflopattackilluminata.pdf

"IBM's been before. Its Whirlwind II used 55,000 vacuum tubes. "

Other sources state that Whirlwind II, as such, was never built. SAGE was sometimes described as Whirlwind II, but this is incorrect according to other sources.

My objective is to obtain documentation from validated sources to use in preparing an accurate database of SAGE information. The Computer History Museum has become a principal source for historical research on computers. As a docent, I'm spending the time because our archivists can't spend the time I've expended on tracking down accurate information.

>> I was one of the Airman that Blue Suited the Q-7 at
>> Malmstrom AFB, Great Falls, Mt in 1963.
>Question:
> 1) what is "Blue Suited"?

When the SAGE project first became an active weapons system for the Air Defense Command, the maintenance on the FSQ-7 (and -8) was performed by IBM. In the early 60's the Air Force decided to take over maintenance, or "Blue Suit" maintenance. It was a term used by the Air Force (at least at that time) to signify that Air Force personnel would be doing the job instead of civilian personnel. Did I clear that up or make it murkier?

>> Spent three years working in the Central Computer section
>> of the Q-7.
> Hmmm - sound like "Blue Suited" is maintenance?
> A person trained/specialized in one section?

When I first went into computers in the Air Force (I cross-trained out of aircraft radio maintenance), it was divided up into three sections. We were some of the first in the Air Force to be in the new field, computers. The section you were in was determined by an IQ test. If you did well in logical thinking, you were assigned to the central computer section. For what is a computer but a logical thinking machine? If you did well in mechanical aptitude, you were assigned to input/output. That included card readers and card punches, printers, tape drives and computer entry punches. Plus the logic that controlled the input/output between the Long Range Radar sites and the Q-7. And the logic for the X-tell (cross talking) between the other Q-7 sites and the forward-tel and back-tel to the Q-8 sites. The FSQ-7 was a direction center, and for every so many Q-7's there was an FSQ-8, which was the control center. From there it went on up to NORAD.

They later determined this was not the way to break down the maintenance, as the computers and the peripheral equipment became more sophisticated. The transition from electron tubes to transistors and then on to chips made the computer so small that it just was not feasible to divide the maintenance up anymore. So you worked on everything as you were assigned from one system to another.

> Got any "war stories" that techies might enjoy?
>> The other two sections being Displays and Input/Output.
>> I got to work on the Q-7 in its final days at Luke AFB, Az.
>> in the early 70's. I was only there for about a year or so.
>> It wasn't nearly as exciting as my first time up at Great Falls.
>> I had worked on a whole lot newer computer in the mean time,
>> but not physically bigger.
> Easy to believe ;-)

Also, you were actually inside the computer when you performed maintenance. Everything was bigger than life with the Q-7. So you could, with an o-scope, look at each and every bit of the word as it worked its way through the computer. It was really a simple machine to work on, when I compare it to later and physically smaller, but much faster systems. I did work on other large systems, the Philco 2000 and 1000 at NORAD's Cheyenne Mountain Complex and the IBM 360 and 370 at a satellite monitoring site in Australia. But, even if they were larger and faster computers, it just wasn't the same as the old Q-7.

>> The Q-7 was the easiest computer I worked on,
>> more forgiving of my mistakes.
>> Dale Williams
>> blackkoko22@yahoo.com

I hope this clears up some of the questions and just doesn't generate a whole lot more. But if I can answer any other questions I will certainly try. It does strain the old memory going back 40 years to remember things. But it is fun remembering.

Dale

From Les Earnest Mar 2009 - replying about ARPANET in INFOROOTS
Les is not overly shy ;-))
This article is for those who think that
large organizations are/can-be efficient. No exceptions are discussed here :-|

Sue Thomas wrote:
>
> As per my earlier posts, I’m researching the influence of California
> on the development of the environment we now know as cyberspace. [
> http://www.thewildsurmise.com ] I’ve just read Annalee Saxenian’s
> ‘Regional Advantage’ about the cultural differences between east coast
> and west coast tech industries, and that has led me to wonder whether
> it would have made a huge difference to the development of the
> internet if the first few nodes had been based in east coast locations
> (apart from the obvious technical issues which had made the selected
> groups the best choice). Maybe the idea was even considered then
> discarded?
>
> To refresh your memories, the first 4 nodes of Arpanet were in Los
> Angeles, Menlo Park, Santa Barbara, and Utah. Any thoughts on possible
> alternative hosts on the East Coast – or other parts of the US – along
> with speculations as to whether anything would have been different,
> and why?
>

=======================================

In my view the East-West question doesn't make sense at several levels.

First, there was no 4 node ARPAnet, though some now like to think
there was. The first four nodes were designated as a test rig, composed
of sites that were willing to shake down, debug and measure the
performance of the first packet switching schemes. The first operational
network was to be transcontinental and have 8 nodes though things got a
bit mixed up before all of the early sites got on line.

The first nodes connected were not Los Angeles, Menlo Park, Santa
Barbara, and Utah. They were UCLA, SRI, UCSB, and U. Utah. It didn't
matter where they were located.

ARPAnet was based principally on technology developed on the East
Coast, specifically at MIT.

I believe I am qualified to comment inasmuch as I was at MIT in the late
1950s when the underlying technology was developed there and was the
Stanford representative on the ARPAnet startup committee during 1967-68.
For the record, I'm a West Coast guy who went East for 12 years.

It appears to me that there were five key steps that led to the creation
of ARPAnet building on the 1950 technology base provided by general
purpose computers and telegraph and telephone systems:

(1) development of high speed digital communications;

(2) development of computer timesharing;

(3) recognition of the need for an integrated network;

(4) proof that partially connected networks would work;

(5) development of packet switching.
Steps 1, 2 and 4 happened at MIT; 3 and 5 were done mostly by people
from MIT.

HIGH SPEED DIGITAL COMMUNICATIONS

The first computer network was part of the SAGE air defense system,
which was initiated by MIT Lincoln Lab in the 1950s. SAGE used modems
that had been invented nearby in 1949 by Jack Harrington and his group
at the Air Force Cambridge Research Center (AFCRC). SAGE became a
nationwide network connecting 23 gigantic computers, one being in
Canada. I use the term "gigantic" in the physical sense inasmuch as they
were the largest computers ever built. Each had about 55,000 vacuum
tubes and occupied an area the size of a football field. Never mind that
as an air defense system SAGE was a fraud that cost taxpayers billions
of dollars and was a cornerstone of the military-industrial complex that
has since bilked U.S. taxpayers out of many more billions. That
conspiracy is still going strong but it's another story.

SAGE used digital communications to collect radar data from remote
sites, transmit guidance commands via packet radio to manned
interceptors and ground-to-air missiles, and to send tactical
information to adjacent control centers and to higher level
command-control systems. However all these links were special-purpose.

TIMESHARING

Another thing that had to be invented before ARPAnet became worthwhile
was timesharing, since without it there would have been no need for
interactive networking until about 20 years later, when personal
computers became feasible. Timesharing was an accidental invention in
SAGE, which processed radar data cyclically and put keyboard
interactions and display generation in the same loop. That was a
special-purpose kind of timesharing but John McCarthy, who was then a
professor at MIT, foresaw the need for general purpose timesharing and
proposed it in 1959. Subsequently several timesharing projects in the
Boston area confirmed its feasibility in the early 1960s, the first
being CTSS at MIT. The first commercial timesharing system was the
PDP-6, developed in 1964 by Digital Equipment Corporation, a spin-off
from MIT Lincoln Lab.

RECOGNIZING THE NEED FOR A GENERAL PURPOSE NETWORK

The first person to clearly enunciate the need for a general purpose
computer network was J.C.R. Licklider, or "Lick" as his friends called
him. I first met Lick in 1949, when he gave me a summer job as a guinea
pig in one of his experiments. I ran into him again when I joined MIT
Lincoln Lab in 1956 to help design SAGE. Lick later became a key
scientist at Bolt, Beranek and Newman (BBN), where he supported the
development by Ed Fredkin and John McCarthy of an early timesharing
system on a DEC PDP-1 computer. In 1962 Lick joined the Defense
Department's Advanced Research Projects Agency (ARPA) and founded its
Information Processing Techniques Office (IPTO). A short time later he
proposed building an interactive network linking existing timesharing
systems. Lick didn't know exactly how to build such a network but left
it on the IPTO agenda when he returned to MIT and kept pushing for it --
see http://www.kurzweilai.net/articles/art0366.html?printable=1

PROOF THAT PARTIALLY CONNECTED NETWORKS CAN WORK

The next two steps in ARPAnet development came out of a group of MIT
graduate students who spent evenings and weekends in the early 1960s
sharing the TX-2 computer at MIT Lincoln Lab. TX-2 had been designed by
Wes Clark, mostly using modules that had been engineered by Ken Olsen
before he left to found DEC. Some of the students involved were Ivan
Sutherland who was developing his Sketchpad drawing system, Larry
Roberts who was working on perception of three dimensional objects from
photographs, and Len Kleinrock who was doing network simulations to
investigate queuing theory for various configurations of partially
connected networks. I was there too, creating the first cursive
handwriting recognizer, which included the first spelling checker as a
subroutine. We all helped each other occasionally and became friends.

Kleinrock completed his PhD in 1963 and showed that a partially
connected network could provide adequate throughput between any pair of
nodes. He then accepted a faculty appointment at UCLA. Concurrently,
Paul Baran at Rand Corporation was looking at networking from the
viewpoint of survivability in an environment where links could be taken
out and concluded that a multipath network would be more survivable than
the tree-structured networks used in military communications systems. He
attempted to get funding to build such a system but was unable to get it
funded. Later Donald Davies in Britain also advocated a packet switching
scheme but also was unable to find funding.

Ivan Sutherland finished his dissertation in 1963 and in 1964 was
recruited by Lick as his replacement, so that Lick could return to MIT.
Larry Roberts had also finished his dissertation in 1963 and hung around
Lincoln Lab. Ivan followed up on Lick's idea of creating a network by
funding Larry to put together a link between two timesharing systems,
the TX-2 at Lincoln Lab and the AN/FSQ-32 at Systems Development
Corporation.

Meanwhile I was loaned by my employer (MITRE Corp., an MIT spin-off) to
the Central Intelligence Agency for a year and then to the Joint Chiefs
of Staff to work on more ill-conceived projects. Given that Ivan and I
were both in the Washington area we socialized occasionally and, in
1965, he tried to recruit me to join him at ARPA. I politely declined,
saying that after working in the military-industrial complex for over a
dozen years my goal was to get as far from the Pentagon as possible. He
then kindly suggested that I talk to Stanford, where he had just funded
a new million dollar computer facility for artificial intelligence
research but then had second thoughts about project management there. I
followed that suggestion and soon joyously left for Stanford. I learned
later that Ivan had also tried to recruit Larry Roberts, who also
declined, but was able to get Bob Taylor to come from NASA.

DEVELOPMENT OF ARPANET

When Bob Taylor took over IPTO at the end of 1965 he decided to move
ahead on creating a network and realized that he needed someone with
expertise to lead the project. Apparently based on suggestions from Lick
and Ivan he recruited Larry Roberts. Perhaps more accurately, he coerced
Roberts by leaning on his employer, MIT. Upon joining ARPA Roberts put
together a start-up committee composed of representatives of sites that
were interested in participating. I participated representing the Stanford
Artificial Intelligence Lab (SAIL) even though my boss, John McCarthy,
had major reservations about this possibly intrusive project.

We started formulating packet designs and our original plan was to have
each timesharing system talk directly to its neighbors over the network.
However Wes Clark, who had been the architect of TX-2 and other things,
then made the excellent suggestion that minicomputers be used to handle
packet switching. Those machines, which we called Interface Message
Processors (IMPs), would then talk to the main computer through a
separate interface.

We developed performance specifications for the network that focused on
two functions: file transfer and remote access, which came to be called
"Telnet". As I recall we discussed doing email briefly, given that it
was already available in some timesharing systems, but rejected it as a
frivolous use of the net -- after all, we already had U.S. Mail (!).

Though we were somewhat off-target in our initial choice of services it
turned out well in the long run. When the need for email services was
recognized a few years later it was easily provided using the file
transfer capability and when interactive web applications began to be
developed about 35 years later the short round-trip communications
delays specified for Telnet proved adequate for these new applications.

As I recall the Request for Proposals was issued in the summer of 1968
and our committee reviewed the resulting technical proposals at a
meeting in November that year in Monterey, California, at the Del Monte
Hotel, which I had arranged. Of the dozen or so that were submitted
there were two standouts, from Raytheon and BBN, both from the Boston
area. The consensus evaluation of our committee, based just on technical
issues, not financial proposals, was that Raytheon was the better
choice, though I thought BBN had done a better job and said so. Perhaps
I was influenced by the fact that a substantial number of people in the
BBN group had recently come there from Lincoln Lab after working on
SAGE. In any case I was happily surprised two months later to learn that
BBN was selected as the contractor. I later tried to find out how that
happened but got conflicting reports.

As soon as the IMP interface specifications were developed by BBN we all
started working on making that connection. I got one of our graduate
students (perhaps Phil Petit) to design the hardware and another (Andy
Moorer) to write the operating system software. However we then ran into
a brick wall. Our operating system, which was closely related to DEC's
TOPS-10, required that the entire system be resident in main memory,
which was then core. Unfortunately the addition of the ARPAnet interface
software made the operating system so large that there was not enough
room to run user programs! I therefore had to round up more funding, go
out for bids on more core memory and get it installed before we could
connect to ARPAnet. Thus even though we were supposed to be one of the
elite initial eight we were unable to connect until some months later.

Even then ARPAnet was not a very lively place, though the pace picked up
a lot after email was added. Most sites left many of their data and
program files publicly accessible and a lot of benign thievery went on,
which was fine inasmuch as nearly all of the participants were
universities. For example after I recruited Ralph Gorin to make an
improved spelling checker around 1971 it soon spread over the net to
most DEC-10 and DEC-20 computer facilities that were on the net.

After I wrote FINGER, which provided a kind of social networking service
and had a proto-blog capability (see
http://asia.cnet.com/reviews/pcperipherals/0,39051168,61998604,00.htm),
it soon spread everywhere. Unfortunately the Unix version, written at UC
Berkeley, had a security vulnerability that was exploited by the first
Internet Worm, launched in 1988 from MIT by Robert Morris. Happily,
FINGER was used more constructively by other people, including Linus
Torvalds, who reportedly used his .plan file to coordinate the
development of Linux.

When Vint Cerf finished his PhD at UCLA in 1972 and came to Stanford, I
helped him round up funding for his network protocol research project
that produced TCP/IP, which facilitated the integration of disparate
networks into the Internet beginning 1 January 1983.

In summary, the trajectory of ARPAnet turned out to be somewhat bumpy,
but it demonstrated the practicality of packet switching and was close
enough to what was needed that it was able to evolve useful services. In
the beginning all telecommunications companies scoffed at the idea that
packet switching networks would work but they have now largely switched
over to using this technology. It is amusing to note that in 1976 AT&T
was offered the chance to take over ARPAnet with no up-front cost if
they would agree to run it, but they refused on the grounds that this
technology had no future!

-Les Earnest


The following is from Peter A. Goodwin
placed here because of an "altercation" about the usefulness of SAGE vs jamming -
this hint might eventually help determine if/how much SAGE input could handle jamming
and prevent/reduce "GIGO" (Garbage In Garbage Out).

BMEWS: There were two 7090s in each installation. Raw radar data were digitized and fed to the computers for threat analysis. The radar data consisted of target position -- elevation, azimuth, and range (radial distance, from the radar-echo-return time) -- and range rate (a Doppler-effect measurement). GE made the radars, Sylvania made the gear that performed the digitizing. IBM's ASDD Mohansic Lab made the 7090 radar-data real-time data channel boxes. Both 7090s operated on all data, but only one reported to NORAD; the idea was that if either machine failed, the other would be ready to perform actively. The 7090 program that performed threat analysis had operating levels to avoid being overwhelmed with incoming data: if the data stream was moderate, it would perform thorough analysis; if the data stream became heavy, it would only perform cursory analysis, the theory being that by that time the world was going to hell anyway, so who cared? The 7090 program was written in assembly language in order to optimize size and speed. The instruction code was written to be non-volatile, and every couple of seconds the system would perform a Hamming checksum to ensure that nothing in the instruction code had been altered.
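The periodic code-integrity check described above can be sketched roughly as follows. This is only an illustration: the 36-bit word size is the 7090's, but the region contents and the simple additive checksum are stand-ins, since the details of the actual Hamming-style check aren't given here.

```python
WORD_MASK = (1 << 36) - 1  # the IBM 7090 had a 36-bit word

def code_checksum(words):
    """Fold the instruction region into a single 36-bit value
    (a stand-in for the Hamming-style check the program used)."""
    total = 0
    for w in words:
        total = (total + w) & WORD_MASK
    return total

# Instruction code is "non-volatile": frozen at load time,
# with a reference checksum taken once.
instructions = [0o123456701234, 0o765432107654, 0o000000000777]
reference = code_checksum(instructions)

def integrity_ok(region):
    """Run every couple of seconds: detect any alteration of the code."""
    return code_checksum(region) == reference

print(integrity_ok(instructions))   # → True
instructions[1] ^= 0o4000           # simulate a flipped bit in one word
print(integrity_ok(instructions))   # → False
```

A mismatch tells the system that its own instructions have been altered, without the cost of re-verifying the code word by word against a second copy.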

I was a computer repairman (30551) on the SAGE Q7 in Duluth after being trained in the
(as I recall) first cohort of USAF “Project Bluesuit” students for about 9 months at
the Kingston NY IBM plant.

One feature that I don’t see mentioned in much detail has to do with reliability and the use
of dozens of motor-generators to produce all the required voltages for the various logic
sections — like the plate voltage for the Memory Buffer Register, or a grid voltage
for the Program Counter. The computer itself controlled the motor-generators, and when we
ran diagnostics “under margins” the diagnostic program would cycle, lowering a
voltage step by step until a failure occurred. When one did, we would replace the failing
Pluggable Unit and run the diagnostic again under margins until no failure occurred.
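The "under margins" procedure described above is essentially a control loop. A minimal sketch, with the caveat that the voltage values, step size, and failure threshold here are invented for illustration and the real diagnostics were far more elaborate:

```python
def marginal_check(set_voltage, run_diagnostic, nominal=150.0, step=5.0, floor=100.0):
    """Step a supply down from nominal until the diagnostic fails.
    Returns the voltage at which the first failure appeared, or None if
    the unit passed all the way down to the floor. The supply is restored
    to nominal before returning either way."""
    v = nominal
    while v >= floor:
        set_voltage(v)
        if not run_diagnostic():
            set_voltage(nominal)
            return v
        v -= step
    set_voltage(nominal)
    return None

# Hypothetical unit whose tubes misbehave once the supply drops below 132 V.
state = {"v": 150.0}
def set_v(v): state["v"] = v
def diag(): return state["v"] >= 132.0

failing_margin = marginal_check(set_v, diag)  # first failure at 130.0 V
```

A unit that failed with, say, a 20 V margin still worked at nominal voltage, which is exactly why the technique could flag a Pluggable Unit for replacement before it actually died in service.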
I don’t think we ever had a complete outage (both computers dead) at the Duluth site.
As I recall, it was practice to switch over daily so we could run maintenance on the
machine that had been active.

The logic tubes were all pentodes, designed so that the transconductance would abruptly
crash at end-of-life, but would crash early under lowered voltage. By this means we could
anticipate future failures and replace the soon-to-die tubes before they failed.

There is a picture of the console that shows some switches and a 4-5” meter. That was
the control panel for running under margins.

====

Another feature worth mentioning is that there was a panel with a rotary switch near
the upper left corner of the console that had a speaker and an audio amplifier. The switch
could select which of a few bits of the accumulator to listen to. I was a computer operator
for about a year and could tell from the sounds whether the active computer was beginning
to fail. Normal operation had a distinctive cadence due to the 15-second real-time loop
with its 3 segments of 5 seconds each. Experienced operators and repairmen could gauge
how much Long-Range or Gap-Filler radar data was being processed. Failures usually
resulted in distinctive noise-like sounds. As operator, if I started hearing suspicious
sounds, I would call upstairs to the blue-room to see if the scope operators were seeing
anomalies. If so, we would start a controlled switchover that would pass active data
to the standby computer and then switch over.
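Listening to an accumulator bit amounts to monitoring how often that bit toggles: a healthy program loop produces a regular, rhythmic pattern, while a failing machine sounds like noise. A loose sketch of that distinction, with the sampling scheme and the example bit patterns being illustrative assumptions rather than anything from the real hardware:

```python
def toggle_rate(samples):
    """Fraction of adjacent samples in which the monitored bit changed state.
    Low rate with a regular pattern ~ the normal cadence; a rate near the
    maximum ~ the noise-like sound of a failing machine."""
    flips = sum(a != b for a, b in zip(samples, samples[1:]))
    return flips / (len(samples) - 1)

# A steady program loop toggles the bit in a slow, regular pattern...
steady = [(i // 4) % 2 for i in range(64)]
# ...while "noise" flips the bit on essentially every sample.
noisy = [i % 2 for i in range(64)]
```

Here `toggle_rate(steady)` comes out low (about 0.24) while `toggle_rate(noisy)` is 1.0; an experienced ear was doing this comparison in real time, against the memorized 15-second cadence.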

I was there during the Cuban Missile crisis ready to fight the air conditioned war.

It is breathtaking to remember walking around inside a 32-bit computer knowing that today’s
IC geometries are as small as 7 nanometers — 20 billion transistors in some chips. My hearing aids
probably have more transistors than the Q7 had tubes.

====

I was also the designated tour guide who took visiting brass or nuns around and explained
how the system worked.