"If you care a lot about the future, it shows that you believe in what you're doing now and you think it's worthwhile enough to have some lasting impact." – Syd Mead

Month: December 2008

College

I went to Colorado State University in 1988. As I went through college I forgot about my fantasies of computers changing society. I was focused on writing programs that were more sophisticated than I had ever written before, appreciating architectural features of software and emulating them in my own projects, and learning computer science theory.

At the time, I thought my best classes were some basic hardware class I took, Data Structures, Foundations of Computer Architecture, Linguistics (a non-CS course), a half-semester course on the C language, Programming Languages, and a graduate level course on compilers. Out of all of them the last two felt the most rewarding. I had the same professor for both. Maybe that wasn’t a coincidence.

In my second year I took a course called Comparative Programming Languages, where we surveyed Icon, Prolog, Lisp, and C. My professor for the class was a terrible teacher. There didn’t appear to be much point to the course besides exposing us to these languages. To make things interesting (for my professor, I think), he assigned problems that were inordinately hard when compared to my other CS courses. I got through Icon and C fine. Prolog gave me a few problems, but I was able to get the gist of it. I was taking the half-semester C course at the same time, which was fortunate for me. Otherwise, I doubt I would’ve gotten through my C assignments.

Lisp was the worst! I had never encountered a language I couldn’t tackle before, but Lisp confounded me. We got some supplemental material on it in class, but it wasn’t much help in relating to the language. What made it even harder was that our professor insisted we use it in the functional style: no set, setq, etc., nor anything that used them, was allowed. All loops had to be recursive. We had two assignments in Lisp and I didn’t complete either one. I felt utterly defeated by it and I vowed never to look at it again.

My C class was fun. Our teacher had a loose curriculum, and the focus was on just getting us familiar with the basics. In a few assignments he would say “experiment with this construct.” There was no hard goal in mind. He just wanted to see that we had used it in some way and had learned about it. I loved this! I came to like C’s elegance.

I took Programming Languages in my fourth year. My professor was great. He described a few different types of programming languages, and he discussed some runtime operating models. He described how functional languages worked. Lisp made more sense to me after that. We looked at Icon, SML, and Smalltalk, doing a couple assignments in each. He gave us a description of the Smalltalk system that stuck with me for years. He said that in its original implementation it wasn’t just a language. It literally was the operating system of the computer it ran on. It had a graphical interface, and the system could be modified while it was running. This was a real brain twister for me. How could the user modify it while it was running?? I had never seen such a thing. The thought of it intrigued me though. I wanted to know more about it, but couldn’t find any resources on it.

I fell in love with Smalltalk. It was my very first object-oriented language. We only got to use the language, not the system. We used GNU Smalltalk in its scripting mode. We’d edit our code in vi, and then run it through GNU Smalltalk on the command line. Any error messages or “transcript” output would go to the console.

I learned what I think I would call a “Smalltalk style” of programming, of creating object instances (nodes) that have references to each other, each doing very simple tasks, working cooperatively to accomplish a larger goal. I had the experience in one Smalltalk assignment of feeling like I was creating my own declarative programming language of sorts. Nowadays we’d say I had created a DSL (Domain-Specific Language). Just the experience of doing this was great! I had no idea programming could be this expressive.

I took compilers in my fifth year. Here, CS started to take on the feel of math. Compiler design was expressed mathematically. We used the red “Dragon book”, Compilers: Principles, Techniques, and Tools, by Aho, Sethi, and Ullman. The book impressed me right away with this introductory acknowledgement:

This book was phototypeset by the authors using the excellent software available on the UNIX system. The typesetting command read:

pic files | tbl | eqn | troff -ms

pic is Brian Kernighan’s language for typesetting figures; we owe Brian a special debt of gratitude for accommodating our special and extensive figure-drawing needs so cheerfully. tbl is Mike Lesk’s language for laying out tables. eqn is Brian Kernighan and Lorinda Cherry’s language for typesetting mathematics. troff is Joe Ossana’s program for formatting text for a phototypesetter, which in our case was a Mergenthaler Linotron 202/N. The ms package of troff macros was written by Mike Lesk. In addition, we managed the text using make, due to Stu Feldman. Cross references within the text were maintained using awk, created by Al Aho, Brian Kernighan, and Peter Weinberger [“awk” was named after the initials of Aho, Weinberger, and Kernighan — Mark], and sed, created by Lee McMahon.

I thought this was really cool, because it felt like they were “eating their own dog food.”

We learned about the concepts of bootstrapping, cross-compilers for system development, LR and LALR parsers, bottom-up and top-down parsing, parse trees, pattern recognizers (lexers), stack machines, etc.

For our semester project we had to implement a compiler for a Pascal-like language, and it had to be capable of handling recursion. Rather than generate assembly or machine code, we were allowed to generate C code, but it had to be generated as if it were 3-address code. We were allowed to use a couple of C constructs, but by and large it had to read like an assembly program. A couple of other rules were that we had to build our own symbol table (in the compiler) and our own call stack (in the compiled program).

We worked on our projects in pairs. We were taught some basics about how to use lex and yacc, but we weren’t told the whole story… My partner and I ended up using yacc as a driver for our own parse-tree-building routines. We wrote all of our code in C. We made the thing so complicated. We invented stacks for various things, like handling order of operations for mathematical expressions. We went through all this trouble, and then one day I happened to chat with one of my other classmates and he told me, “Oh, you don’t have to do all that. Yacc will do that for you.” I was dumbfounded. How come nobody told us this before?? Oh well, it was too late. It was near the end of the semester, and we had to turn in test results. My memory is that even though it was an ad hoc design, our compiler got 4 out of 5 tests correct. The 5th one, the one that did recursion, failed. Anyway, I did okay in the course, and that felt like an accomplishment.

I wanted to do it up right, so after I graduated I took the time to rewrite the compiler, fully using yacc’s abilities. I didn’t have the necessary tools on my Atari STe to do the project, so I used Nyx, a free, publicly available Unix system that I could access via a modem over a straight serial connection (PPP hadn’t been invented yet). It was just like calling up a BBS, except I had shell access.

I structured everything cleanly in the compiler, and I got the bugs worked out so it could handle recursion.

A more sophisticated perspective

Close to the time I graduated a mini-series came out on PBS called “The Machine That Changed The World.” What interested me about it was its focus on computer history. It filled in more of the story from the time when I had researched it in Jr. high and high school.

My favorite episode was “The Paperback Computer,” which focused on the research efforts that went into creating the personal computer, and the commercial products (primarily the Apple Macintosh) that came from them.

It gave me my first glimpse ever of the work done by Douglas Engelbart, though it showed only a small slice: the invention of the mouse. Mitch Kapor, one of the people interviewed for this episode, pointed out that most people had never heard of Engelbart, yet considering what we are using today, he is the most important figure in computing. This episode also gave me my first glimpse of the research done at Xerox PARC on GUIs, though there was no mention of the Smalltalk system (even though that’s the graphics display you see in that segment).

I liked the history lessons and the artifacts it showed. The deeper ideas lost me. By the time I saw this series, I had already heard of the idea that the computer was a new medium. It was mentioned sometimes in computer magazines I read. I was unclear on what this really meant, though.

I had already experienced some aspects of this idea without realizing it, especially when I used 8-bit computers with Basic or Logo, which gave me a feeling of interactivity. Their responsiveness towards the programmer was pretty good for their limited capabilities. It felt like a machine I could mold and change into anything I wanted via programming. That was what I liked most about using a computer. Being unfamiliar with the concept of what a medium really was, though, I thought when digital video and audio came along, along with the predictions about digital TV over the “Information Superhighway,” that this was what it was all about. I had fallen into the mindset a lot of people had at the time: the computer was meant to automate old media.

A sense of history

I had never seen a machine before that I could change so easily (relative to other machines). I had this sense that these computers represented something big. I didn’t know what it was. It was just a general feeling I got from reading computer magazines. They reported on what was going on in the industry. All I knew was that I really liked dinking around with them, and I knew there were some others who did, too, but they were at most ten years older than me.

The culture of computing and programming was all over the place as well, though the computer was always portrayed as this “wonder machine” that had magical powers, and programming it was a mysterious dark art which made neat things happen after typing furiously tappy-tappy on the keyboard for ten seconds. Still, it was all encouragement to get involved.

I got a sense early on that it was something that divided the generations. Most adults I ran into knew hardly anything about computers, much less how to program them, like I did. They had no interest in learning about them either. They felt computers were too complicated, threatening, mysterious. They didn’t have much of an idea of their potential, just that “it’s the future”, and they encouraged me to learn more about them, because I was going to need that knowledge, though their idea of “learn more about them” meant learning how to use one, not program it. If I told them I knew how to program computers they’d say, “Wow! You’re really on top of it then.” They were kind of amazed that a child could have such advanced knowledge, knowledge that seemed so beyond their reach. They couldn’t imagine it.

A few adults, including my mom, asked me why I was so fascinated by computers. Why were they important? I would tell them about the creative process I went through. I’d get a vision in my mind of something I thought would be interesting, or useful to myself and others. I’d get excited enough about it to try to create it. The hard part was translating what I saw in my mind, which was already finished, into the computer’s language, to get it to reproduce what I wanted. When I was successful it was the biggest high for me. Seeing my vision play out in front of me on a computer screen was electrifying. It made my day. I characterized computers as “creation machines”. What I also liked about them is they weren’t messy. Creating with a computer wasn’t like writing or typing on paper, or painting, where if you made a mistake you had to live with it, work around it, or start over somewhere to get rid of the mistake. I could always fix my mistakes on a computer, leaving no trace behind. The difference was with paper it was always easy to find my mistakes. On the computer I had to figure out where the mistakes were!

The few adults I knew at the time who knew how to use a computer tended to not be so impressed with programming ability, not because they knew better, but because they were satisfied being users. They couldn’t imagine needing to program the computer to do anything. Anything they needed to do could be satisfied by a commercial package they could buy at a computer store. They regarded programming as an interesting hobby of kids like myself, but irrelevant.

As I became familiar with the wider world of what represented computing at the time (primarily companies and the computers they sold), I got a sense that this creation was historic. Sometimes in school I would get an open-ended research assignment. Each chance I got I’d do a paper on the history of the computer, each time trying to deepen my knowledge of it.

The earliest computer I’d found in my research materials was a mechanical adding machine that Blaise Pascal created in 1645, called the Pascaline. There used to be a modern equivalent of it as late as 25-30 years ago that you could buy cheap at the store. It was shaped like a ruler, and it contained a set of dials you could stick a pencil or pen point into. All dials started out displaying zeros. Each dial represented a power of ten (starting at 10⁰, the ones place). If you wanted to add 5 + 5, you would “dial” 5 in the ones place, and then “dial” 5 in the same place again. Each “dial” action added to the quantity in each place. The dials had some hidden pegs in them to handle carries. So when you did this, the ones place dial would return to “0”, and the tens place dial would automatically shift to “1”, producing the result “10”.
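The dial-and-carry behavior can be sketched in a few lines of Python. This is a hypothetical simulation of the idea, not a description of the real mechanism; `dials` holds one digit per dial, ones place first.

```python
def dial(dials, place, digit):
    """Simulate "dialing" a digit into a Pascaline-style adder.

    `dials` is a list of digits, least significant first. Adding into one
    dial ripples any carry leftward, as the hidden pegs did mechanically.
    """
    dials[place] += digit
    i = place
    while dials[i] >= 10:          # a dial passed "9": carry into the next one
        dials[i] -= 10
        if i + 1 == len(dials):
            dials.append(0)        # grow if the carry runs past the last dial
        dials[i + 1] += 1
        i += 1

# 5 + 5: dial "5" into the ones place twice; the dials then read "10".
dials = [0]
dial(dials, 0, 5)
dial(dials, 0, 5)
```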

The research material I was able to find at the time only went up to the late 1960s. It gave me the impression that there was a dividing line. Up to about 1950, all it talked about were the research efforts to create mechanical, electric, and finally electronic computers. The exception was Herman Hollerith, who I think was the first to find a commercial application for an electric computer, calculating the 1890 census, and who built the company that would ultimately come to be known as IBM. The last computers it talked about being created by research institutions were the Harvard Mark I, ENIAC, and EDSAC. By the 1950s (in the timeline) the research material veered off from scientific/research efforts, for the most part, and instead talked about commercial machines produced by Remington Rand (the Univac) and IBM. Probably the last thing it talked about were minicomputer models from the 1960s. The research I did matched well with the ethos of computing at the time: it’s all about the hardware.

High school

When I got into high school I joined a computer club there. Every year club members participated in the American Computer Science League (ACSL). We had study materials that focused on computer science topics. From time to time we had programming problems to solve, and written quizzes. We earned individual scores for these things, as well as a combined score for the team.

We would have a week to come up with our programming solutions on paper (computer use wasn’t allowed). On the testing day we would have an hour to type in our programs, test and debug them, at the end of which the computer teacher would come around and administer the official test for a score.

What was neat was we could use whatever programming language we wanted. A couple of the students had parents who worked at the University of Colorado, and had access to Unix systems. They wrote their programs in C on the university’s computers. Most of us wrote ours in Basic on the Apples we had at school.

Just as an aside, I was alarmed to read Imran’s article in 2007 about using FizzBuzz to interview programmers, because the problem he posed was so simple, yet he said “most computer science graduates can’t [solve it in a couple minutes]”. It reminded me of an ACSL programming problem we had to solve called “Buzz”. I wrote my solution in Basic, just using iteration. Here is the algorithm we had to implement:

input 5 numbers
if any input number contains the digit "9", print "Buzz"
if any input number is evenly divisible by 8, print "Buzz"
if the sum of the digits of any input number is divisible by 4,
    print "Buzz"
for every input number
    A: sum its digits (we'll use "sum" as a variable in this loop
       for a sum of digits)
    if sum >= 10, get the digits for sum and go back to Step A
    if sum equals 7, print "Buzz"
    if sum equals any digit in the original input number, print "Buzz"
end loop

input   output
198     Buzz Buzz
36
144     Buzz
88      Buzz Buzz Buzz
10      Buzz
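Those rules can be sketched in Python (a hypothetical reconstruction of the algorithm as stated above, not my original Basic solution). Each rule that matches prints one “Buzz” for that input number, which reproduces the test table.

```python
def buzz_count(n):
    """Count how many times the ACSL rules print "Buzz" for one input number."""
    digits = [int(d) for d in str(n)]
    count = 0
    if 9 in digits:                # contains the digit "9"
        count += 1
    if n % 8 == 0:                 # evenly divisible by 8
        count += 1
    if sum(digits) % 4 == 0:       # digit sum divisible by 4
        count += 1
    s = sum(digits)                # Step A: sum the digits...
    while s >= 10:                 # ...repeating until a single digit remains
        s = sum(int(d) for d in str(s))
    if s == 7:                     # repeated sum equals 7
        count += 1
    if s in digits:                # repeated sum matches a digit of the input
        count += 1
    return count

# Reproduce the test table:
for n in (198, 36, 144, 88, 10):
    print(n, " ".join(["Buzz"] * buzz_count(n)))
```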

In my junior and senior year, based on our regional scores, we qualified to go to the national finals. The first year we went we bombed, scoring in last place. Ironically we got kudos for this. A complimentary blurb in the local paper was written about us. We got a congratulatory letter from the superintendent. We got awards at a general awards assembly (others got awards for other accomplishments, too). A bit much. I think this was all because it was the first time our club had been invited to the nationals. The next year we went we scored in the middle of the pack, and heard not a peep from anybody!

(Update 1-18-2010: I’ve had some other recollections about this time period, and I’ve included them in the following seven paragraphs.)

By this point I was starting to feel like an “experienced programmer”. I felt very comfortable doing it, though the ACSL challenges were at times too much for me.

I talk about a couple of software programs I wrote below. You can see video of them in operation here.

I started to have the desire to write programs in a more sophisticated way. I began to see that there were routines that I would write over and over again for different projects, and I wished there was a way for me to write code in what would now be called “components” or DLLs, so that it could be generalized and reused. I also wanted to be able to write programs where the parts were isolated from each other, so that I could make major revisions without having to change parts of the program which should not be concerned with how the part I changed was implemented.

I even started thinking of things in terms of systems a little. A project I had been working on since Jr. high was a set of programs I used to write, edit, and play Mad Libs on a computer. I noticed that it had a major emphasis on text, and I wondered if there was a way to generalize some of the code I had written for it into a “text system”, so that people could not only play this specific game, but they could also do other things with text. I didn’t have the slightest idea how to do that, but the idea was there.

By my senior year of high school I had started work on my biggest project yet, an app I had been wanting for a while, called “Week-In-Advance”. It was a weekly scheduler, but I wanted it to be user-friendly, with a windowing interface. I was inspired by an Apple Lisa demo I had seen a couple years earlier. I spent months working on it. The code base was getting so big, I realized I had to break it up into subroutines, rather than one big long program, to make it manageable. I wrote it in Basic, back when all it had for branching was the Goto, Gosub, and Return commands. I used Gosub and Return to implement the subroutines.

I learned some advanced techniques in this project. One feature I spent a lot of time on was how to create expanding and closing windows on the screen. I tried a bunch of different animation techniques. Most of them were too slow to be useable. I finally figured out one day that a box on the screen was really just a set of four lines, two vertical, and two horizontal, and that I could make it expand efficiently by just taking the useable area of the screen, and dividing it horizontally and vertically by the number of times I would allow the window to expand, until it reached its full size. The horizontal lines were bounded by the vertical lines, and the vertical lines were bounded by the horizontals. It worked beautifully. I had generalized it enough so that I could close a window the same way, with some animated shrinking boxes. I just ran the “window expanding” routine in reverse by negating the parameters. This showed me the power of “systematizing” something, rather than using code written in a narrow-minded fashion to make one thing happen, without considering how else the code might be used.
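The technique can be sketched like this (a hypothetical reconstruction in Python; the original was in Basic): divide the half-width and half-height of the usable screen area by the number of animation steps, and draw each successively larger box. Drawing the same frames in reverse order closes the window.

```python
def box_frames(screen_w, screen_h, steps):
    """Corner coordinates (x1, y1, x2, y2) for each frame of an expanding
    box, growing from the screen's center to full size in `steps` frames."""
    cx, cy = screen_w // 2, screen_h // 2
    frames = []
    for i in range(1, steps + 1):
        half_w = (screen_w // 2) * i // steps   # step sizes come from dividing
        half_h = (screen_h // 2) * i // steps   # the usable area by `steps`
        frames.append((cx - half_w, cy - half_h, cx + half_w, cy + half_h))
    return frames

# Expanding: draw the frames in order; closing: draw them in reverse.
frames = box_frames(320, 200, 4)
```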

The experience I had with using subroutines felt good. It kept my code under control. Soon after I wanted to learn Pascal, so I devoted time to doing that. It had procedures and functions as built-in constructs. It felt great to use them compared to my experience with Basic. The Basic I used had no scoping rules whatsoever. Pascal had them, and programming felt so much more manageable.

Getting a sense of the future

As I worked with computers more, and talked to people about them in my teen years, I got a wider sense that they were going to change our society, in ways I assumed would be beneficial. I’ve tried to remember why I thought this. I remember my mom told me this in one of the conversations I had with her about them. Besides that, I think I was influenced by futurist literature, artwork, and science fiction. I had this vague idea that somehow in the future computers would help us understand our world better, and become better thinkers. We would be smarter, better educated. We would be a better society for it. I had no idea how this would happen. I had a magical image of the whole thing that was deterministic and technology-centered.

It was a foregone conclusion that I would go to college. Both my mom and my maternal grandparents (the only ones I knew) wanted me to do it. My grandparents even offered to pay full expenses so long as I went to a liberal arts college.

In my senior year I was trying to decide what my major would be. I knew I wanted a career that involved computer programming. I looked at my past experience, at the kinds of projects I liked to work on. My interest seemed to be in application programming. I figured I was more business-oriented, not a hacker. The first major I researched was CIS (Computer Information Science/Systems). I looked at their curriculum and was uninspired. I felt uncertain about what to do. I had heard a little about the computer science curriculum at a few in-state universities, but it felt too theoretical for my taste. Finally, I consulted with my high school’s computer teacher, and we had a heart-to-heart talk. She (yes, the computer teacher was a woman) had known me all through my time there, because she ran the computer club. She advised me to go into computer science, because I would learn how computers worked, and this would be valuable in my chosen career.

She had a computer science degree that she earned in the 1960s. She had an interesting life story. I remember she said she raised her family on a farm. At some point, I don’t know if it was before or after she had kids, she went to college, and was the only woman in the whole CS program. She told me stories about that time. She experienced blatant sexism. Male students would come up to her and say, “What are you doing here? Women don’t take computer science,” and, “You don’t belong here.” She told me about writing programs on punch cards (rubber bands were her friends 🙂 ), taking a program in to be run on the school’s mainframe, and waiting until 3am for it to run and to get her printout. I couldn’t imagine it.

I took her advice that I should take CS, kind of on faith that she was right.

This series of posts has been brewing within me for more than a year, though for some reason it just never felt like the right time to talk about it. Two months ago I found The Machine That Changed The World online, a mini-series I had seen about 15 years ago. I was going to just write about it here, but then all this other stuff came out of me, and it became about that, though I’ve included it in this series.

I go through things chronologically, though I only reference a few dates, so one subject will tend to abruptly transition into another, just because of what I was focused on at a point in time. I call this “my journey”.

Getting introduced

From when I was 6 or so, in the mid-1970s, I had had a couple experiences using computing devices, and a couple brief chances to use general purpose computers. My interest in computers began in earnest in 1981. I was 11 going on 12. My mother and I had moved to a new town in Colorado a year earlier. One day we both went by the local public library and I saw a man sitting in front of an Atari 400 computer that was set up with a Sony Trinitron TV set. I saw him switch between a blue screen with cryptic code on it, and a low-rez graphics screen that had what looked like a fence and some blocky “horses” set up on a starting line. And then I saw the horses multiply across the screen, fast. The colorful graphics really caught my eye. I was already used to the blocky graphics of the Atari 2600 VCS. I had seen those around in TV ads and stores. I sat and watched the man work on his project. Each time he ran the program the same thing happened. After watching this for a while I got the impression that he was trying to write a horse racing game, but that something was wrong with it.

My mother noticed my interest and tried to get me to ask the librarian about the computer, to see if I could use it. I was reticent at first. I assumed that only “special” people could use it. I figured the man worked at the library or something, or that only adults could use it. I asked the librarian. She asked my age. They had a minimum age requirement of ten years old. She said all I needed to do was sign up for an orientation that lasted for 15 minutes, and then sign up for computer time when it was available. So at my mother’s urging I signed up for the orientation. Several days later I went to orientation with about five other people of different ages (children and adults), and got a brief once-over about how to operate the Atari, how to sign up for time, and what software they had available for it behind the desk.

I was interested right off in learning to do what I saw that man do: program the computer. My first stab at this was running a tutorial called An Invitation to Programming that came on 3 cassette tapes, double-sided. The tape drive I put the tapes into looked like an ordinary tape recorder, except that it had a cable running right to the computer. I thought the tutorial was the neatest thing. At first I think I was distracted by how it worked and paid hardly any attention to what it was attempting to teach. The fascinating thing about it was the first part of each tape had a voice track on it that would play over the TV’s speaker while the tutorial loaded. This was really clever. Rather than sitting there waiting for the program to load, I could listen to a male narrator talk about what I was going to learn with some flashy music playing in the background. By the time he was done, the tutorial ran. Once it started running the voice track started up again, this time with a female narrator. What appeared on the screen was coordinated with the audio track. When the tutorial would stop to give me a quiz, the tape drive would stop. When I was ready to continue, the tape drive started up again. “How is it doing this?” I wondered with awe.

Introduction to “An Invitation to Programming” by Atari

Part 4 of “An Invitation to Programming”

By the way, this is not me using the tutorial. I found these videos on YouTube.

The tutorial was about the Basic programming language. I went through the whole thing two or three times, in multiple sessions, because I could tell as I got towards the more advanced stuff that I wasn’t getting it. I only understood the simple things, like how to print on the screen, and how to use the Goto command. Other stuff like DIMming variables, getting input, and forming loops gave me a lot of trouble. They had the manual for Atari BASIC behind the desk and I tried reading that. It really made my head hurt, but I came to understand more.

Eventually I found out there was an Atari 800 in another part of the library that I could sign up for, so I’d sometimes use that one. There seemed to be more people who would hang around it. I found a couple people who were more knowledgeable about Basic and I’d ask them to help me, which they did.

I fell into a LOT of pitfalls. I couldn’t write a complete program that worked, though I kept trying. Every time I’d encounter an error I’d guess at what the problem was, and I would try changing something at random. I had no context. I felt lost. I needed a lot of help from others in understanding different contexts in my program.

It took me a month or two before I had written my first program that went beyond “hello world” stuff. It was a math quiz. Over several more months I stumbled a lot on Basic but kept learning. I remember I used to get SO frustrated! Time and again I would think I knew what I was doing, but something would go wrong. Debugging felt so hard. I’d go home fuming. I had to practice relaxing.

As time passed a strange thing started happening. I’d go through my “frustration cycle,” focus on something else for the day, have dinner with my mom and talk about it, watch TV, do my homework, or something. I’d go to bed thinking that this problem I was obsessed with was insurmountable. I’d wake up the next morning, and the answer would just come to me, like it was the most obvious thing in the world. I couldn’t try it out right away, because I had to get to school, but all day I’d feel anxious to try out my solution. Finally, I’d get my chance, and it would work! Wow! What a feeling! This process totally mystified me. How was it that at one point I could be dealing with a problem that felt intractable, and then at another time, when I had the opportunity to really relax, the problem was as easy to solve as brushing my teeth? I have heard recently that the brain uses sleep time to “organize” stuff that’s been absorbed while awake. Maybe that’s it.

When I entered Jr. high school I eventually discovered that they had some computer magazines in their library. A few of them had program listings. One of them was a magazine called Compute!. I fell in love with it almost immediately. The first thing that drew me in was the cover art. It looked fun! Secondly, the content was approachable. It had complete listings for games. I went through their stack of Compute! issues with a passion. I was checking them out often, taking them to the public library to type into the Atari, debugging typing mistakes, and having fun seeing the programs come to life.

We had two Apple II’s at my school, one in the library that I could sign up for, and one that a math teacher had requested for his office. In my first year there he started a computer club. I signed up for the club immediately. Each Friday after school we would get together. There were about four of us, plus the math teacher, huddled around his computer. He would teach us about programming in Basic, or have us try to solve some programming problem.

Along the way I got my own ideas for projects to do. I wrote a few of my own programs on the Atari at the library. By my eighth grade year my school installed a computer lab filled with Apple II’s, and they offered computer use and programming classes. I signed up for both. In the programming course we covered Basic and Logo.

Logo was the first language I worked with where programming felt easy. Quite frankly the language felt well designed, too, though my memory is that it didn’t get much respect from the other programmers my age. They saw it as a child’s language, too immature for us teenagers. Then again, they thought “real programming” was making the computer do cool stuff in hexadecimal. Every construct we used in Logo was relevant to the platform we were using. Unfortunately all we learned about with Logo was procedural programming. We didn’t learn what it was intended for: to teach children math. After this course I got some more project ideas, which I finished successfully after a lot of work.

In the computer use course we learned about a couple word processors (Apple Writer and Bank Street Writer), VisiCalc, and a simple database program whose name I can’t remember. I took it because I thought it would teach useful skills. I had no idea how to use any of these tools before then.