The 12th annual ACM SIGPLAN Conference on Object-Oriented Programming Systems, Languages, and Applications

October 5-9, 1997, Atlanta, Georgia

[Alan C. Kay] Thank you. Well, I presume most of you have been up all night. I can't imagine ever seeing this many programmers at eight o'clock in the morning. [Laughter] I guess this is the largest bathroom I've ever given a talk in. [Smiles] [Laughter] That was just a test to see if you could understand me. I can't actually understand myself up here. I actually haven't been to OOPSLA since the first one and, when I got invited to give this talk, while I was thinking—well, should I go, or should I not, or what should I do, it occurred to me that this conference, on this day, is pretty much at the epicenter of the twenty-fifth anniversary of Smalltalk. The one-page interpreter scheme that I wrote out was done just a few weeks ago, twenty-five years ago. [Applause] The first working version of it was done a few weeks from now, twenty-five years ago, so this is about in the center and—let me see if I can get our motto up on—could I have that first slide, please?

I'm not gonna give a historical talk, as I finally discharged those obligations in the History of Programming Languages Conference a couple of years ago, but I thought it might be interesting for some of you, who might not have been computing for the last twenty-five or thirty years, to take a two-minute trip. I believe that this set of images basically goes back to 1973 and 1974 at Xerox PARC—[they] show some of the first children that we worked with. The music that you'll hear on this clip was composed by one of the members of our group, Chris Jeffers. It's called "The Happy Hacker", in case you want a theme song. It's played in real-time FM synthesis that we developed for the Alto computer. This is the forerunner of the workstations and the Macintosh without any additional sound synthesis hardware at all, because why should you have that if your computer is designed well. I think you'll get a little picture of it, but before I turn on that clip, let's just for the heck of it see how many people are in this room today, who participated in the Xerox PARC Smalltalk experience, between roughly 1971 and 1983. Could you stand up? Let's see if we—how many people are actually here. Anybody without gray hair? [Smiles] [Laughter] Thank you. Okay, let's roll that clip.

Well, that was things as they existed about twenty-five years ago. The other thing I wanted to do in the beginning part of this talk—I tried to figure out how to work my way into it, and I finally remembered a paper that Dijkstra—I don't know how many of you have ever met Dijkstra, but you probably know that arrogance in computer science is measured in nano-Dijkstras. [Laughter] He once wrote a paper—of the kind that he liked to write a lot of—which had the title "On the Fact that the Atlantic Has Two Sides". It was basically all about how different the approaches to computing science were in Europe, especially in Holland, and in the United States. In the US, here, we were not mathematical enough, and gee, in Holland, if you're a full professor, you're actually appointed by the Queen, and there are many other important distinctions made between the two cultures. So, I wrote a rebuttal paper, just called "On the Fact that Most of the Software in the World is Written on One Side of the Atlantic". [Laughter] It was basically about how computers form a new kind of math. You can't judge them. They don't really fit well in classical math, and people who tried to do that are basically indulging in a form of masturbation.—Maybe even realizing it. [Laughter] It was a kind of practical math. The balance was between making structures that were supposed to be consistent, of a much larger kind than classical math had ever come close to dreaming of attempting, and having to deal with the exact same problems that classical math of any size has to deal with, which is being able to be convincing about having covered all of the cases.

There's a mathematician by the name of Euler, whose speculations about what might be true fill twenty large books, and most of them were right. Almost all of his proofs were wrong, though, and many PhDs in mathematics in the last century and this one have been earned by mathematicians going to Euler's books, finding one of his proofs, showing it was a bad proof, then guessing that his insight was probably correct, and finding a much more convincing proof. So debugging actually goes on in mathematics as well.

I think the main thing about doing OOP work, or any kind of programming work, is that there has to be some exquisite blend between beauty and practicality. There's no reason to sacrifice either one of those, and people who are willing to sacrifice either one of those, I don't think really get what computing is all about. It's like saying I have really great ideas for paintings, but I'm just gonna use a brush but no paint. So my ideas will be represented by the gestures I make over the paper—and don't tell any 20th century artist that, or they might decide to make a videotape of them doing that and put it in a museum. [Laughter]

I had this problem figuring out what to talk about. It's always difficult because technical people always seem to know so much, but it's interesting, again, to look at what is actually being done in the world under the name of OOP. I've been shown some very, very strange-looking pieces of code over the years by various people, including people in universities, that they have said is OOP code, and written in an OOP language—and actually, I made up the term object-oriented, and I can tell you I did not have C++ in mind. [Laughter and applause] An important thing here is—I have many of the same feelings about Smalltalk—and I'm not going to try and do Smalltalk in here, because I think there is one really important thing about Smalltalk, and some of the languages like it that we should pay really, really close attention to—but it has almost nothing to do with either the syntax or the accumulated superclass library. Both of these are taken as being the language, as though it was issued from some gods on Olympus. So I want to talk a little bit more about my personal reaction to OOP when I started thinking about it in the sixties, and instead of making it a historical talk, to try and think about whether these reactions and insights have any place today.

In the sixties things were quite mechanical. There's a sense of simple mechanism because computers were as large as this room. The one that Ivan Sutherland did Sketchpad on was the size of this room. It was one of the last computers in the US large enough to have its own roof. It was the building it was in. But the programs were quite small and they had a lot in common with their mathematical antecedents. One way of thinking about the semantics of math that was based on logic, is as interlocking gears. Everything kind of has to fit together, and if it does, and everything is compatible at the end, you get the final turning of the shaft that you want.

An analogy to these programs of the sixties is a dog house. If you take any random boards, nails, and a hammer, pound them together, and you've got a structure that will stay up. You don't have to know anything, except how to pound a nail, to do that. Now, somebody could come along and look at this dog house and say, Wow! If we could just expand that by a factor of a hundred we could make ourselves a cathedral. It's about three feet high. That would give us something thirty stories high, and that would be really impressive. We could get a lot of people in there. The carpenters would set to work blowing this thing up by a factor of a hundred. Now, we all know, being engineers and scientists, that when you blow something up by a factor of a hundred, its mass goes up by a factor of a million, and its strength, which is mostly due to cross sections of things, only goes up by a factor of ten thousand. When you blow something up by a factor of a hundred, it gets a factor of a hundred weaker in its ability to hold itself up, and in fact, what will happen to this dog house is that it will just collapse into a pile of rubble. Then there are two choices you can have when that happens. The most popular one is to say, Well, that was what we were trying to do all along. [Laughter] Put more garbage on it, plaster it over with limestone, and say, Yes, we were really trying to do pyramids, not gothic cathedrals. That, in fact, accounts for much of the structure of modern operating systems today. [Laughter and applause]
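The square-cube arithmetic in this passage can be checked directly. A quick sketch, using the factor of a hundred from the dog-house story:

```python
# Square-cube law: scale a structure's linear dimension by k.
k = 100
mass_factor = k ** 3        # mass follows volume: 100^3 = 1,000,000
strength_factor = k ** 2    # strength follows cross-section: 100^2 = 10,000

# Strength-to-weight drops by k, so the scaled-up dog house is a
# factor of a hundred weaker relative to its own weight.
weakening = mass_factor // strength_factor
print(mass_factor, strength_factor, weakening)  # 1000000 10000 100
```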

Or, you can come up with a new concept, which the people who started getting interested in complex structures many years ago did. They called it architecture: literally, the designing and building of successful arches. A non-obvious, non-linear interaction between simple materials to give you non-obvious synergies, and a vast multiplication of materials. It's quite remarkable to people when I tell them that the amount of material in [unintelligible] cathedral, which is an enormous physical structure, is less than the amount of material that was put into the Parthenon. The reason is that it's almost all air and almost all glass. Everything is cunningly organized in a beautiful structure to make the whole have much more integrity than any of its parts. That's the other way you can go, and part of the message of OOP was that, as complexity starts becoming more and more important, architecture is always going to dominate material. In fact, the sad fact, I think, about OOP is that people didn't get interested in architecture because of the beauty of it. They're only starting to get interested in architecture now, when the Internet is forcing everybody to do it. That's pretty pathetic.

I'm going to use a metaphor for this talk which is drawn from a wonderful book called "The Act of Creation" by Arthur Koestler. Koestler was a novelist who became a cognitive scientist in his later years. One of the great books he wrote was about what might creativity be.—Learning.—He realized that learning, of course, is an act of creation itself, because something happens in you that wasn't there before. He used a metaphor of thoughts as ants crawling on a plane. In this case it's a pink plane, and there's a lot of things you can do on a pink plane. You can have goals. You can choose directions. You can move along. But you're basically in the pink context. It means that progress, in a fixed context, is almost always a form of optimization, because if you're actually coming up with something new, it wouldn't have been part of the rules or the context for what the pink plane is all about. Creative acts, generally, are ones that don't stay in the same context that they're in. He says, every once in a while, even though you have been taught carefully by parents and by school for many years, you have a blue idea. Maybe when you're taking a shower. Maybe when you're out jogging. Maybe when you're resting in an unguarded moment, suddenly, that thing that you were puzzling about, wondering about, looking at, appears to you in a completely different light, as though it were something else.

Koestler said that the emotional reaction to this comes basically in three forms. If you're telling a joke, it's HA HA! If you're doing science, it's A HA!, and if you're doing art it's AHHH! He says, because in each case, something very similar is happening. A joke takes you down the garden path, and suddenly reveals it's about something else, and you get a very aggressive explosion. Science has much of the same feeling to it, and often, when you see something in science, you start laughing because it was right there in front of you, and it is a kind of a joke. Art is there to remind us—Great art is there to remind us that whatever context we think we're in, there are other contexts. Art is always there to take us out of the context that we are in and make us aware of other contexts. This is a very simple—you can even call it a simple-minded metaphor—but it will certainly serve for this talk today. He also pointed out that you have to have something blue to have blue thoughts with. I think this is generally missed in people who specialize to the exclusion of anything else. When you specialize, you are basically putting yourself into a mental state where optimization is pretty much all you can do. You have to learn lots of different kinds of things in order to have the start of these other contexts.

So here's a couple of knocks on the head I had over the years. I just want to tell them to you quickly. This one I think you'll find interesting because it is the earliest known form of what we call data abstraction. I was in the Air Force in 1961, and I saw it in 1961, and it probably goes back one year before. Back then, they really didn't have operating systems. Air Training Command had to send tapes of many kinds of records around from Air Force base to Air Force base. There was a question of how you could deal with all of these things that used to be card images, because tape had come in, and there were starting to be more and more complicated formats, and somebody—almost certainly an enlisted man, because officers didn't program back then—came up with the following idea. This person said, on the third part of the record on this tape we'll put all of the records of this particular type. On the second part—the middle part—we'll put all of the procedures that know how to deal with the formats on this third part of the tape. In the first part we'll put pointers into the procedures, and in fact, let's make the first ten or so pointers standard, like reading and writing fields, and trying to print; let's have a standard vocabulary for the first ten of these, and then we can have idiosyncratic ones later on. All you had to do to read a tape back in 1961 was to read the front part of a record—one of these big records—into core storage, and start jumping indirect through the pointers, and the procedures were there.
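A minimal sketch of the Sergeant's scheme, with hypothetical names (the original was machine code on tape, not Python): each record travels with a standard table of entry points, and a reader dispatches through the table without ever parsing the record's format itself.

```python
# Hypothetical reconstruction of the 1961 tape-record scheme:
# part 3 holds the raw data, part 2 the procedures that understand it,
# part 1 a standard table of pointers into those procedures.

def make_record(data):
    def read_field(i):              # a procedure that knows this format
        return data[i]
    def print_record():
        return ",".join(data)
    # part 1: the standard vocabulary, in a fixed order
    return {"read_field": read_field, "print": print_record, "data": data}

record = make_record(["KAY", "ALAN", "1961"])
# A reader "jumps indirect" through the table; the format stays hidden.
print(record["print"]())            # KAY,ALAN,1961
print(record["read_field"](0))      # KAY
```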

I really would like you to contrast that with what you have to do with HTML on the Internet. Think about it. HTML on the Internet has gone back to the dark ages because it presupposes that there should be a browser that should understand its formats. This has to be one of the worst ideas since MS-DOS. [Laughter] This is really a shame. It's maybe what happens when physicists decide to play with computers, I'm not sure. [Laughter] In fact, we can see what's happened to the Internet now, is that it is gradually getting—There are two wars going on. There's a set of browser wars which are 100 percent irrelevant. They're basically an attempt, either at demonstrating a non-understanding of how to build complex systems, or an even cruder attempt simply to gather territory. I suspect Microsoft is in the latter camp here. You don't need a browser, if you followed what this Staff Sergeant in the Air Force knew how to do in 1961. You just read it in. It should travel with all the things that it needs, and you don't need anything more complex than something like X Windows. Hopefully better. But basically, you want to be able to distribute all of the knowledge of all the things that are there, and in fact, the Internet is starting to move in that direction as people discover ever more complex HTML formats, ever more intractable. This is one of these mistakes that has been recapitulated every generation. It's simply not the way to do it.

So here's a great idea—and by the way, this kind of programming was done before there were higher level languages in the Air Force. This approach to things was forced out of the Air Force when they standardized on COBOL.—Ivan Sutherland's Sketchpad. I've usually shown a movie of what it was like. I won't today. Immensely sophisticated. Almost staggering in its conception on what it was able to do. Very much an object-oriented system. It had an actual notion of classes and sub-classes. It had a notion of polymorphism, even stronger than the air training command version. Next slide, please.

I had seen the idea three or four times, but it wasn't till I had to figure out Simula—we thought it was supposed to be an ALGOL. It turned out this pile of tapes was the first Simula: an ALGOL that had been doctored by Case Western Reserve and by the inventors of Simula, Nygaard and Dahl in Norway, and distributed along with some incomprehensible documentation in 1966. It was through trying to understand what Simula was that, finally—I'm not sure exactly why, it's just—I think it's maybe if you see a good idea that's odd, four times, in four different costumes, it finally starts to make an impression, and here's the choice you have when you're faced with something new.

You can take this technological advance and you could decide this is a better way of doing the stuff I'm doing now, and I can use this to continue on the path that I'm going. That's staying in the pink plane. Or, you can say this is not a better old thing, this is almost a new thing, and I wonder what that new thing is trying to be. If you do that, there's a chance of actually, perhaps gaining some incredible leverage over simply optimizing something that can't be optimized very much. Simula came out of the world of data structures and procedures, and had much of that flavour, if you wanted to look at it that way. But it had a way of making relationships of the states of your computation with procedures, that was extremely helpful, and much better, and more general than what were called own variables in ALGOL 60. That was one way to think of it. Then there's this other question; if it was almost a new thing, what kind of a new thing was it? Well, one of my undergraduate majors was in molecular biology, and my particular interest was both in cell physiology and in embryology. Morphogenesis they call it today.

This book, "Molecular Biology of the Gene", had just come out in 1965. Wonderful book. Still in print. It has gone through many, many editions. The only words that are common between this book and the one today are the articles, like "the" and "an". Actually, the word gene, I think, is still in there, but it means something completely different now. But one of the things that Watson did in this book was to make an assay. [Pun on essay] The first assay of an entire living creature. That was the E. coli bacterium. Next slide, please.

If you look inside one of these the complexity is staggering. Those popcorn things are protein molecules that have about five thousand atoms in them. As you can see on the slide, when you get rid of the small molecules, like water and calcium ions, and potassium ions, and so forth, which constitute about 70 percent of the mass of this thing, the 30 percent that remains has about 120 million components that interact with each other in an informational way. Each one of these components carries quite a bit of information. The simple-minded way of thinking of these things is that it works kind of like OPS5. There is a pattern matcher, and then there are things that happen if patterns are matched successfully. The state that's involved in that is about a hundred gigs; and you can multiply that out today. It's only a hundred desktops or so, but it's still pretty impressive as an amount of computation. Maybe the most interesting thing about this structure is that the rapidity of computation seriously rivals that of computers today, particularly when you're considering that it's done in parallel. For example, one of those popcorn-sized things moves its own length in just two nanoseconds. One way of visualizing that is, if an atom was the size of a tennis ball, then one of these protein molecules would be about the size of a Volkswagen, and it's moving its own length in two nanoseconds. That's about eight feet on our scale of things.—Can anybody do the arithmetic to tell me what fraction of the speed of light, moving eight feet in two nanoseconds is? Four times? Yeah. Four times the speed of light.—Scale.—If you've ever wondered why chemistry works, this is why. The thermal agitation down there is so unbelievably violent that we could not imagine it, even with the aid of computers.
There's nothing to be seen inside one of these things until you kill it, because it's just a complete blur of activity, and under good conditions, it only takes about fifteen to eighteen minutes for one of these to completely duplicate itself. Okay, so that's a bacterium. Lots more is known today. Another fact to relate this to us, is that these bacteria are about one five-hundredth the size of the cells in our bodies, which instead of 120 million informational components have about sixty billion. We have between ten to the twelfth and ten to the thirteenth, maybe even more, of these cells in our body, and yet only fifty cell divisions happen in a nine-month pregnancy.—It only takes fifty cell divisions to make a baby.—Actually, if you multiply it out, you realize you only need around forty, and the extra ten doublings are there because, during the embryological process, many of the cells that are not fit in one way or another for the organism as a whole are killed. Things are done by overproliferating, testing, and trimming to this much larger plan. Each one of these structures, us, is embedded in an enormous biomass. So, to a person whose blue context might have been biology, something like a computer could not possibly be regarded as being particularly complex, or large, or fast.
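Two bits of arithmetic from this passage can be checked directly: the scaled-up protein speed and the cell-division count. A quick sketch, not from the talk:

```python
# 1. The scaled-up protein: eight feet in two nanoseconds.
meters = 8 * 0.3048            # eight feet, in meters
v = meters / 2e-9              # ~1.22e9 m/s at the Volkswagen scale
c = 2.998e8                    # speed of light, m/s
ratio = v / c                  # ~4, as the audience answered (the real
                               # molecule, of course, moves far slower)

# 2. Fifty cell divisions in a pregnancy is doubling arithmetic.
print(2 ** 40)                 # ~1.1e12: enough for 10^12-10^13 cells
print(2 ** 50)                 # ~1.1e15: the extra doublings are the
                               # overproliferate, test, and trim margin
print(round(ratio))            # 4
```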

Slow. Small. Stupid. That's what computers are. So the question is, how can we get them to realize their destiny? Next slide, please. [Waiting for slide to appear.] We're using a form of technology that Napoléon used. Do you remember those [unintelligible] across France?

The shift in point of view here is from mechanical.—This is a problem. If you take things like dog houses, they don't scale by a factor of a hundred very well. If you take things like clocks, they don't scale by a factor of a hundred very well. If you take things like cells, they not only scale by factors of a hundred, but by factors of a trillion, and the question is, how do they do it, and how might we adapt this idea for building complex systems?—Okay, this is the simple one. This is the one, by the way, that C++ has still not figured out, though. [Laughter] There is no idea so simple and powerful that you can't get zillions of people to misunderstand it.

You must, must not let the interior of any one of these things be a factor in the computation of the whole. Okay. This is only part of the story. The cell membrane is there to keep most things out, as much as it is there to keep certain things in. I think our confusion with objects is the problem that in our Western culture, we have a language that has very hard nouns and verbs in it. Our process words stink. It's much easier for us when we think of an object—and I have apologized profusely over the last twenty years for making up the term object-oriented, because as soon as it started to be misapplied, I realized that I should have used a much more process-oriented term for it.—The Japanese have an interesting word, which is called ma. Spelled in English, just ma. Ma is the stuff in-between what we call objects. It's the stuff we don't see, because we're focused on the nounness of things rather than the processness of things. Japanese has a more process-feel oriented way of looking at how things relate to each other. You can always tell that by looking at the size of the word it takes to express something that is important. Ma is very short. We have to use words like interstitial or worse to approximate what the Japanese are talking about.

The realization here—and it's not possible to assign this realization to any particular person because it was in the seeds of Sketchpad, and in the seeds of [the] air training command file system, and in the seeds of Simula. That is, that once you have encapsulated, in such a way that there is an interface between the inside and the outside, it is possible to make an object act like anything.

The reason is simply this, that what you have encapsulated is a computer. You have done a powerful thing in computer science, which is to take the powerful thing you're working on, and not lose it by partitioning up your design space. This is the bug in data and procedure languages. I think this is the most pernicious thing about languages a lot like C++ and Java, is that they think they're helping the programmer by looking as much like the old thing as possible, but in fact they are hurting the programmer terribly by making it difficult for the programmer to understand what's really powerful about this new metaphor. People who were doing time-sharing systems had already figured this out as well. Butler Lampson's thesis in 1965 was about what you want to give a person on a time-sharing system, is something that is now called a virtual machine, which is not the same as what the Java VM is, but something that is as much like the physical computer as possible, but give one separately to everybody. UNIX had that sense about it. The biggest problem with that scheme is that a UNIX process had an overhead of about two thousand bytes just to have a process, and so it was going to be very difficult in UNIX to let a UNIX process just be the number three. You would be going from three bits to a couple of thousand bytes, and you have this problem with scaling. A lot of the problem here is both deciding that the biological metaphor is the one that is going to win out over the next twenty-five years or so, and then committing to it enough to get it so it can be practical at all the levels of scale we actually need. Then we have one trick we can do that biology doesn't know how to do; which is, we can take the DNA out of the cells, and that allows us to deal with cystic fibrosis much more easily than the way it is done today. Systems do have cystic fibrosis.—Some of you may know [that] cystic fibrosis today, for some people, is treated by infecting them with a virus. 
A modified cold virus gives them a lung infection, but the corrected gene for cystic fibrosis is in this cold virus. The cold virus is too weak to actually destroy the lungs, like pneumonia does, but it is strong enough to insert a copy of that gene in every cell in the lungs. That is what does the trick. That's a very complicated way of re-programming an organism's DNA once it has gotten started.
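The point that an encapsulated object can act like anything, even the number three, without the kilobytes of overhead a UNIX process carries, can be sketched with plain message dispatch. The names here are illustrative, not from the talk:

```python
# A sketch: an object whose only commitment is its message interface.
# From the outside it behaves like the number three; the inside is free
# to be a lookup table, a wire format, or a whole virtual machine.
class Three:
    def send(self, message, *args):
        behavior = {"value": lambda: 3,
                    "plus":  lambda n: 3 + n,
                    "print": lambda: "3"}
        return behavior[message](*args)

three = Three()
print(three.send("plus", 4))   # 7 -- the caller never sees the representation
```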

Here's one that is amazing to me that we haven't seen more of. For instance, one of the more amazing things to me, of people who have been trying to put OOP on the Internet, is that I do not—and I am hoping someone will come up afterwards and tell me of an exception to this—but I do not know of anybody yet who has realized that, at the very least, every object should have a URL, because what the heck are they if they aren't these things, and I believe that every object on the Internet should have an IP [address], because that represents, much better, what the actual abstractions are of physical hardware to the bits. So this is an early insight, that objects basically are like servers. This notion of polymorphism, which used to be called generic procedures, is a way of thinking about classes of these servers. Everybody knows about that.
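A hypothetical sketch of "every object should have a URL": a registry stands in for the network, mapping URLs to objects, and all access goes through messages, so a local pointer and a remote reference look the same to the caller. Everything here (the `obj://` scheme, the names) is made up for illustration:

```python
# Illustrative only: a toy registry standing in for the Internet.
registry = {}

def publish(url, obj):
    registry[url] = obj

def send(url, message, *args):
    # The caller holds only a URL; dispatch hides where the object lives.
    return getattr(registry[url], message)(*args)

class Counter:
    def __init__(self):
        self.n = 0
    def bump(self):
        self.n += 1
        return self.n

publish("obj://example/counter", Counter())
send("obj://example/counter", "bump")
print(send("obj://example/counter", "bump"))   # 2
```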

Here's one we haven't faced up to much yet, that, now we have to construct this stuff and soon we'll be required to grow it. It's very easy, for instance, to grow a baby six inches. They do it about ten times in their life and you never have to take it down for maintenance. But if you try and grow a 747, you are faced with an unbelievable problem, because it's in this simple-minded mechanical world, in which the only object has been to make the artifact in the first place. Not to fix it. Not to change it. Not to let it live for a hundred years.

So let me ask a question. I won't take names. But how many people here still use a language that essentially forces you—and the development system forces you—to develop outside of the language: compile and reload, and go, even if it's fast, like Visual Café.—How many here still do that? Let's just see. Come on. Admit it! We can have a Texas tent meeting later. [Smiles] [Laughter] That cannot possibly be other than a dead end for building complex systems, where much of the building of complex systems is in part going to go into trying to understand what the possibilities for interoperability are with things that already exist.

I just played a very minor part in the design of the ARPANET. I was one of thirty graduate students who went to systems design meetings to try and formulate design principles for the ARPANET, also about thirty years ago. The ARPANET, of course, became the Internet, and from the time it started running—just around 1969 or so—to this day, it has expanded by about a factor of a hundred million. That's pretty good. Eight orders of magnitude. I talked to Larry Roberts about this the other day. There is not one physical atom in the Internet today that was in the original ARPANET. There is not one line of code in the Internet today that was in the original ARPANET. Of course, if we had IBM mainframes in the original ARPANET, that wouldn't have been true. This is a system that has expanded by a hundred million, has changed every atom and every bit, and has never had to stop. That is the metaphor we absolutely must apply to what we think are smaller things.

When we think programming is small, that's why your programs are so big. That's why they become pyramids instead of gothic cathedrals. Next. [Slide]

Here's the other big source. Certainly the greatest single language of the sixties, I think, along with Simula. One with as many profound or more profound insights—LISP. On page thirteen of this book that was published in 1962, there's a half page of code which is the reflective model of LISP written in itself. All the important details of LISP semantics and the guidelines for how to make a LISP interpreter are in that half page. It is this aspect—this meta-reflective aspect—that to me is the saddest thing about what is happening with Java. When Java first happened I thought, Well, it's legitimizing something that most people have not believed in for a long time, which is this byte-code approach of being multi-platform like we had at Xerox PARC.—It's not a new idea. It actually goes back into the sixties. But when I looked at Java, I thought, My goodness, how could they possibly—and of course, we know that the history was more about programming toasters, originally, than being on the Internet—but, my goodness, how do they hope to survive all of the changes, modifications, adaptations, and interoperability requirements without a meta-system? Without even, for instance, being able to load new things in while you're running. The fact that people adopted this as some great hope is probably the most distressing thing to me, personally, as I said, since MS-DOS. I mean, it represents a real failure of people to understand what the larger picture is, and is going to be. Next slide.
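The half page being pointed at is the metacircular evaluator from the LISP 1.5 manual. A compressed sketch of the same eval/apply idea, rendered in Python rather than LISP, and covering only a toy subset:

```python
# A tiny eval/apply in the spirit of the LISP 1.5 manual's half page.
# Expressions are nested tuples; environments are dicts chained by "__outer__".

def evaluate(x, env):
    if isinstance(x, str):                      # variable reference
        while x not in env:
            env = env["__outer__"]              # walk the environment chain
        return env[x]
    if not isinstance(x, tuple):                # literal (e.g. a number)
        return x
    op, *rest = x
    if op == "quote":
        return rest[0]
    if op == "if":
        test, yes, no = rest
        return evaluate(yes if evaluate(test, env) else no, env)
    if op == "lambda":
        params, body = rest
        return ("closure", params, body, env)   # capture defining environment
    # application: evaluate operator and operands, then apply
    f = evaluate(op, env)
    args = [evaluate(a, env) for a in rest]
    if callable(f):                             # primitive procedure
        return f(*args)
    _, params, body, defenv = f
    frame = dict(zip(params, args))
    frame["__outer__"] = defenv
    return evaluate(body, frame)

globals_env = {"+": lambda a, b: a + b, "__outer__": {}}
expr = (("lambda", ("x",), ("+", "x", 1)), 2)   # ((lambda (x) (+ x 1)) 2)
print(evaluate(expr, globals_env))              # 3
```

The point of the half page is the same as here: the whole semantics of the language is visible to, and expressible in, the language itself.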

This notion of meta-programming. Lots of different ways of looking at it. One of them is that, any particular implementation is making pragmatic choices, and these pragmatic choices are likely not to be able to cover all of the cases, at the level of efficiency, and even at the level of richness required. Of course, this is standard OOP lore. This is why we encapsulate. We need to hide our messes. We need to have different ways of dealing with the same concepts in a way that does not distract the programmer. But in fact, it is also applicable, as the LISP people found, and we at Xerox PARC found; you can also apply it to the building of the language itself. The more the language can see its own structures, the more liberated you can be from the tyranny of a single implementation. I think this is one of the most critical things that very few people are worrying about in a practical form. One of the reasons why this meta stuff is gonna be important, in such a way that nobody will be able to ignore it, is this whole question of, How do we really interoperate on the Internet five and ten years from now. I don't believe Microsoft is going to be able to capture the Internet. I think it's too big. I think there are too many people supplying ideas into it, and I think that people are going to be sophisticated enough to realize that an IBM or a Microsoft type solution is simply neither called for nor possible. What that means is that there's going to be dozens and dozens—there almost already are—dozens and dozens of different object systems, all with very similar semantics, but with very different pragmatic details. 
If you think about what a URL actually is, and you think of what an HTTP message actually is, and if you think of what an object actually is, and if you think of what an object oriented pointer actually is, I think it should be pretty clear that any object-oriented language can internalize its own local pointers to any object in the world, regardless of where it was made. That's the whole point of not being able to see inside. Semantic interoperability is possible almost immediately by simply taking that stance. This is gonna change really everything. Things like JavaBeans and CORBA are not gonna suffice, because at some point one is gonna have to start really discovering what objects think they can do. This is going to lead to a universal interface language, which is not a programming language per se. It's more like a prototyping language that allows an interchange of deep information about what objects think they can do. It allows objects to make experiments with other objects in a safe way to see how they respond to various messages. This is going to be a critical thing to automate in the next ten years. Next slide.
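The "safe experiment" idea can be hinted at in a few lines of Python. This is a loose sketch, not any real discovery protocol, and all the names are invented for the example: before sending a message, a sender asks whether the receiver thinks it can respond, and falls back gracefully if not.

```python
class LocalDocument:
    def render(self):
        return "<local document>"

class RemoteStub:
    """Stands in for an object made elsewhere; we know nothing of its insides."""
    def describe(self):
        return "remote object, vintage unknown"

def probe(obj, message, *args):
    """Try a message on an object; report the result or a polite refusal."""
    handler = getattr(obj, message, None)
    if callable(handler):
        return handler(*args)
    return f"{type(obj).__name__} does not understand {message!r}"

print(probe(LocalDocument(), "render"))   # <local document>
print(probe(RemoteStub(), "render"))      # RemoteStub does not understand 'render'
```

Within one language this is just late binding; the interesting step Kay is pointing at is doing the same negotiation between object systems that were made by different people on different machines.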

[The slide is showing the book cover: The Art of the Metaobject Protocol]

So here's a great book. How many people have read this book? When they wrote this book, I called them up and I said, This is the best book anybody has written in ten years, but why the hell did you write it in such a LISP-centric, closed-club-centric way? This is a hard book for most people to read. If you don't know the LISP culture, it's very hard to read. If you don't know how CLOS [the Common Lisp Object System] is done, it's a very hard book to read. But this book has some of the most profound, and the most practical, insights about OOP that anybody has produced in the last many years. I really commend it to you. If there are any university professors here who would like to get the next [unintelligible] balloon, I will give it to anybody who rewrites that book so that the general object-oriented community can understand it. It would be a great service to mankind.
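The metaobject idea can be shown in miniature using Python metaclasses rather than CLOS, so the mechanism differs but the spirit is the same: the rules by which classes themselves behave are ordinary objects that a programmer can specialize. In this hedged sketch (the names are mine), a metaclass intervenes whenever a class is created and records it.

```python
registry = []

class Traced(type):
    """A metaclass: classes are its instances, so it can intervene
    at the moment a class is created."""
    def __new__(mcls, name, bases, namespace):
        cls = super().__new__(mcls, name, bases, namespace)
        registry.append(name)   # the meta level observes the base level
        return cls

class Point(metaclass=Traced):
    pass

class Rectangle(Point):         # inherits the metaclass from Point
    pass

print(registry)  # ['Point', 'Rectangle']
```

Logging class creation is the least interesting thing one can do at this level; the book's point is that dispatch, slot storage, inheritance itself become things the programmer can reshape without leaving the language.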

What happened in most of the world, starting in the seventies, was abstract data types, which is really staying with an assignment centered way of thinking about programming. In fact, when I made this slide, C++ was just sort of a speck on the horizon. It was one of those things, like MS-DOS, that nobody took seriously, because who would ever fall for a joke like that. [Laughter] Next slide, please.

Actually, my favourite C++ story is, at Apple, there was this operating system, remarkably, coincidentally, named Pink. [Smiles] [Laughter] It was so great! There were two interesting features of this operating system that they were working on. One was, it was always going to be done in two years. [Laughter] We have known some really great operating system designers over the years, and I do not know of any decent operating system that has ever been done in two years, even by people who had ten times the IQ of the Pink people. The other thing about it was, it was gonna be done in C++ for efficiency. [Smiles] [Laughter] Oh, let's not do it in Smalltalk, that's too slow! Let me tell you, there's nothing more inefficient than spending ten years on an operating system that never works. [Laughter and applause] Actually, the worst ones are the ones that appear to work. [Laughter and applause]

Let's take our pink plane, and we can also use this McLuhan quote—my favourite McLuhan quote—"I don't know who discovered water, but it wasn't a fish." He meant us as the fish, and he meant water as our belief structures—as our context. If you had to pick one cause of both particular difficulty in our field, and also a general difficulty in the human race, it's taking single points of view and committing to them like they're religions. This happened with Smalltalk. There's a wonderful quote by Schopenhauer, a German philosopher of the nineteenth century, who said, "Every idea goes through three stages. First, it is denounced as the work of madmen."—This is what Swift called "a confederacy of dunces"—and then later, it's remarked as being totally obvious the whole time, and then the last stage is when the original denouncers claim to have invented it. [Laughter] That's when it gets in its religious stage. To me, the most distressing thing that happened to Smalltalk when it came out of Xerox PARC was that, for most intents and purposes, it quit changing. I can tell you, at Xerox PARC there were four major versions—completely different versions of the language—over about a ten year period, and many dozens of significant releases within those different versions. I think one of the things we liked the most about Smalltalk was not what it could do, but the fact that it was such a good vehicle for bootstrapping the next set of ideas we had about how to do systems building. That, for all intents and purposes—when Smalltalk went commercial—ceased. Even though there is a book—the famous blue book that Adele and Dave wrote, that had the actual code in it for making Smalltalk interpreters and starting this process oneself—almost nobody took advantage of this. Almost no university took advantage of it. Almost no commercial [unintelligible] took advantage of it.
What they missed was, to me, the deepest thing I would like to communicate with you today, and that is we don't know how to design systems yet. Let's not make what we don't know into a religion, for God's sake. What we need to do is to constantly think and think and think about what's important. We have to have our systems let us get to the next levels of abstraction as we come to them. The thing I am most proud of about Smalltalk, pretty much the only thing, from my standpoint, that I am proud of, is that it has been so good at getting rid of previous versions of itself, until it came out into this world.

That is one of the reasons we got involved in doing Smalltalk again after, for me, sixteen years of not working on programming languages. A couple of years ago we started this project called Squeak, which is not simply an attempt to give the world a free Smalltalk, but an attempt to give the world a bootstrapping mechanism for something much better than Smalltalk. When you fool around with Squeak, please, please, think of it from that standpoint. Think of how you can obsolete the damn thing by using its own mechanisms for getting the next version of itself. So look for the blue thoughts!

I was trying to think of how I could stop this talk—because I'll go on and on—and I remembered a story. I'm a pipe organist, and most pipe organists have a hero whose name is E. Power Biggs. He kind of revived the interest in the pipe organ, especially as it was played in the seventeenth and eighteenth centuries, and had a tremendous influence on all of us organists. A good friend of mine was E. Power Biggs' assistant for many years back in the forties and fifties. He's in his eighties now. When we have him for dinner, we always get him to tell us E. Power Biggs stories. The organ E. Power Biggs had in those days for his broadcasts was a dinky little organ, neither fish nor fowl, in a small museum at Harvard called the Busch-Reisinger Museum. But in fact, all manner of music was played on it, and one day this assistant had to fill in for Biggs. He asked Biggs, Well, what is the piece to be played, and Biggs said, Well, I had programmed César Franck's Pièce héroïque—and if you know this piece, it was made for the largest organs that have ever been made, the loudest organs that have ever been made, in the largest cathedrals that have ever been made, because it's a nineteenth century symphonic type organ work—and Biggs was asking my friend to play this on this dinky little organ. He said, But how can I play this, on this? Biggs said, Just play it grand. Just play it grand. To stay with the future as it moves is to always play your systems more grand than they seem to be right now. Thank you.