Moving Forward Through the Past

It is important to look back through history periodically, so that we are not
doomed to repeat it. Here are a few examples of the problems computing
pioneers had to deal with, and why we may be regressing today.

This semester, I've been taking a History of Computing course that has
made me look back at the accomplishments of our predecessors in this
field. It's been interesting to find parallels between today's problems
and the ones that were being solved 25, 50, or 100 years ago. I've also
taken other classes that have forced me to look more closely at
programming and the other processes involved in making computers work.

Back in the early days of electronic computers, reliability was a
constant struggle. ENIAC had thousands of vacuum tubes, and since a tube
usually lasts only a few thousand hours, failures were inevitable. The
ENIAC folks did come up with a number of techniques to get the creme
de la creme of vacuum tubes. They created special testing equipment to
detect tubes that would die early, and they kept the tube heaters on
most of the time, since tubes usually failed when they were turned off
or on. (The same problem I have today with light bulbs...)

The effort to improve reliability carried over into many other
projects. Thanks to some people who were completely obsessed with
getting it right, a number of great products emerged. The early nodes
of the ARPANet were based on a military-hardened version of a Honeywell
computer. A representative of the company once took a sledgehammer to
one of these machines at a trade show to demonstrate how much of a
beating it could take. (It turned out that the over-engineered cabinetry
hindered repair work a bit, but it shows how badly these folks wanted
reliable systems.)

Bolt Beranek and Newman (BBN), which was contracted to build and operate
the ARPANet, created a lot of interesting technology for it. The nodes on
the network were eventually able (not in the beginning, but after a time)
to load new copies of their software from neighboring nodes, so no paper
tape or other media had to be distributed. BBN also created all sorts of
tools for testing network integrity. Their line testers were better than
AT&T's (AT&T provided the leased lines) and could show when a particular
link was on its last legs.

In addition to reliability, many folks were deeply interested in how
best to display information. You may know this as the "Command and
Control" or "C&C" problem: how much (or how little) information do you
need in order to make a command decision? Computers are not just around
to crunch numbers or produce documents; they can also be programmed to
help us see the big picture, surfacing the important information
without telling us everything. This implies a good amount of
interactivity, and the ability to gather, process, and present
information in real time or something close to it.

Today, I'm not sure if we're living up to the standards laid down by
those before us. Reliability is and always will be a problem.
Professionals in the computing field just need to make sure they put the
effort into thinking before doing. I don't subscribe to many of the
Software Engineering ideas -- I just can't justify the overhead involved
in keeping track of my time down to the minute, or producing document
after document describing my thought processes. However, I now
understand how much planning and design is required to produce something
that reeks of excellence. The weekend hack has its place, but I think
we should always try to produce things that will stand up over time.

A lot of effort has been put into rewriting software so that it drives
web pages rather than text or graphical interfaces. The problem is that
web pages go against much of the preceding work on the C&C problem:
they aren't real-time, and many are designed extremely poorly (often
because they are based on interfaces that were poorly designed in the
first place).

Many people have lost sight of what computing technology is really
supposed to do. What did your computer do for you today? Mine showed me
a few web pages (mostly news) and some e-mail. That doesn't sound very
different from life on the ARPANet back in 1972, when people could read
the AP newswire online and some crazy netheads would carry around
twenty-pound 'portable' terminals so that they could dial up and read
e-mail.

Lastly, I'm concerned about what seems to be a lack of openness. This
is a two-way street, of course: some are finding it more beneficial to
become more open about their technologies, while others are trying to
see how much money they can make by tightening their grip on proprietary
devices and techniques. Personally, I don't think our society can
advance without becoming more open. It's striking to discover
similarities between a speech made two thousand years ago against the
Roman occupation of what is now Great Britain and an online rant of
today against corporate conglomerates or other large foes. When we
cannot remember what has come before, technologically or otherwise, we
are doomed to repeat it again and again and again.

Now, what do you want your computer to do for you today? Some early
pioneers wanted computers to help us be more involved with government.
In many ways this has been successful, as laws, court cases, and other
documents have found their way onto the web. Computers also took over
many of the jobs once done by secretaries and other clerks. My History
of Computing professor has found, though, that he is now doing much of
that work himself, not his computer.

Certainly, I've been taking a somewhat dim view. Our society has
definitely progressed. However, it's important to remind ourselves of
the accomplishments of the past so that we can properly focus our
energies in the future.

See Rob Pike's related commentary. Universities contribute a majority
of public computing research, so it only seems natural that if systems
research is stagnating in the universities, then the systems we use are
also stagnating. To me, at least, the state of the art has stagnated,
and in some areas maybe even regressed. Unix, for example, is a
beautiful system, but it is 30 years old, and now we are trying to
bootstrap very different system ideas onto the Unix model. I know the
main reason for this is compatibility, but eventually we will need a
new system model to adapt to our changing use of the computer. We also
need to look at new ways of interacting, not the formulaic user
interface concepts of the past.

one of the people at unisys described to me the history of the
development of their servers.

they started off, twenty or thirty years ago, providing massive
raid-reliability disk storage systems, back when a hard disk would
typically have one failure every few days [yes, that's right: 12-inch
winchester hard drives where, if you inserted the 16 platters
incorrectly, you had better take cover when they wound themselves up].

The Rules Are: You Do Not Answer That The Data Has Been Written Until
You Have Written It To Several Disks And Then Verified *ALL* Of Them.

[ which is a bit of a bugger if, twenty-five years later, you're writing
an SMB server, with client-side write-back caching built into the
protocol :) :) ]

this was for use by banks etc. with the kind of data that, if it's lost,
the bank stands to lose several million or several billion dollars. i
mean, that data *is* the money. so you ABSOLUTELY HAVE to know it's
written to disk.
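
to make that rule concrete, here's a minimal sketch in python. the
mount points and replica count are invented for illustration; a real
system would talk to independent controllers and use a proper verify
protocol:

    import os

    # hypothetical mount points, one per independent disk (invented for this sketch)
    REPLICAS = ["/mnt/disk0", "/mnt/disk1", "/mnt/disk2"]

    def durable_write(name, data):
        paths = [os.path.join(d, name) for d in REPLICAS]
        # 1. write every copy and force it out of the OS cache
        for p in paths:
            with open(p, "wb") as f:
                f.write(data)
                f.flush()
                os.fsync(f.fileno())
        # 2. read *all* copies back and verify them
        for p in paths:
            with open(p, "rb") as f:
                if f.read() != data:
                    raise IOError("verify failed on %s -- do not ack" % p)
        # only now may you answer that the data has been written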

now, take that up to the present day. using advanced modern raid
systems, you now have reliability guaranteed to a staggeringly stupid
degree of probability. so they basically built upon the lessons learned
from old techniques, and just took it one stage further.
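
to put a rough number on "staggeringly stupid degree of probability", a
toy calculation, assuming each disk fails independently. the failure
rate below is invented, and real failures are correlated, which is its
own lesson:

    # toy arithmetic: chance of losing every copy of the data in a year
    p = 0.03                      # invented annual failure probability of one disk
    for n in (1, 2, 3):
        print("%d copies -> ~%g" % (n, p ** n))
    # prints 0.03, 0.0009, 2.7e-05 -- each extra copy divides the risk by ~33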

what else. i read a science fiction book fifteen years ago by...
asimov, i think. the basic premise of the book was that the crew of a
craft lost their computer. instead of panicking, fortunately, they had
someone on-board who was a maths professor, who knew the principles of
astronomy and the laws of motion. so, what they did was, they learned
how to do sines, cosines, tangents etc. and they got to the point
where, after a few months, they could do massive amounts of computation
in their heads, just eating and sleeping numbers. they had the charts,
they worked out where they were: over time, they worked out their speed,
they worked and worked and worked.

and they computed their way home, in a massive, collaborative, parallel
algorithm effort.

even FIFTEEN YEARS ago, the age of machine computing was heralded as the
breakthrough in calculation. everything could, of course, be done
sequentially, by a computer, much faster than human beings doing the same
computations in parallel, right? [remember the slide rule?]

right.

wrong.

it became very apparent that the application of sequential programming
techniques was superseding the art of *manually* performing and using
parallel algorithms, with the result that such parallel algorithm
techniques became lost.

and now, of course, we know better (*duur*), but it's too late: the
people who used to know such parallel algorithm techniques because they
lived them every day have retired... and now we have to rediscover
them.
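
for flavor, here's the same shape in modern code: a toy parallel
reduction in python, where each worker plays one of those human
computers, summing a chunk, and the partials are combined at the end:

    from concurrent.futures import ProcessPoolExecutor

    def partial_sum(chunk):
        # each worker plays one human "computer", summing its share
        return sum(chunk)

    if __name__ == "__main__":
        data = list(range(1_000_000))
        chunks = [data[i::8] for i in range(8)]   # deal the work out to 8 workers
        with ProcessPoolExecutor(max_workers=8) as pool:
            partials = pool.map(partial_sum, chunks)
        print(sum(partials))                      # combine the partials: 499999500000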

another book, again by Asimov, called "The End of Eternity". Computer
Harlan is one of the characters. no, he's not a machine, he's a human.
a very respected individual.

"Computer" is a title. a bit like "Dr" or "Professor". these are the
people responsible for performing "Computation". they are *highly*
respected.

consumer age, and all that stupidity, i'm afraid. it _is_ possible to
take all the air in a lightbulb and replace it completely with argon:
this is not done, so that the bulb is guaranteed to fail and you will go
spend your money.

[by contrast, in a vacuum tube: a small amount of mercury is placed
inside when the tube is constructed, and a tiny coil is fitted with the
*sole* purpose of flash-heating that mercury so that it binds any
remaining air in the tube as mercury oxide, creating near-as-dammit the
best vacuum they could get].

[i'm really sorry: this article seems to be dredging stories out of me
:)]

the Royal Navy issued a commission in about... 1850? the purpose:
navigation charts were expensive to compute, and even more difficult to
copy accurately. pages and pages of numbers. they had discovered that
not only were some of the charts calculated incorrectly, but *one in
three* had copying errors. translated into actual movement, an error in
the fourth decimal place could leave you a hundred miles off course
from your destination.

_not_ funny.

their commission? a) to build a machine capable of performing accurate
computation [well, duur, i think we've got that one down] b) to *print*
the results of the computation accurately.

WYSIWYG? pffh! don't make me laugh :) printing is a pain in the
neck! double-sided, A4 or US-letter, colour, whoops, i seem to have put
the page in backwards. hellooo, where's my network? oh dear, my
scaling's all wrong now that i'm using a black-and-white windows printer
not a colour one AAAGH!

[btw, the challenge was taken up by Babbage with his Difference Engine;
an alternate history in which the machine is completed successfully,
and the consequences that follow, is explored in 'The Difference
Engine' by William Gibson and Bruce Sterling. very interesting book]

So, at least in the US, one's secondary education covers some general
sociopolitical history: here's what people before us did, here's how
they were wrong, or here's how someone later improved on their way of
doing things. It occurs to me that my undergrad education was missing
computing history after Turing. I got the sense from reading the "Worse
is Better" article that We've Been Here Before and I didn't know
about it.

So, what books should I read that cover computing and language history
in interesting, readable detail? I'm not looking for scholarly work or
pages and pages of dry detail; more like Neal Stephenson's explanations
of cryptography than like *Structure and Interpretation of Computer
Programs*.
