The moment came about two years ago, in the middle of a lecture on biologically-inspired algorithms (EC, ant-colony optimization, etc.). My attention had strayed so very briefly – and yet the material immediately ceased to make sense. It seemed obvious that continuing to follow the proof on the board was futile – the house of cards inside my head had tumbled down. Dark thoughts came: if my train of thought is so easily derailed, what am I doing in the thinking business? The answer “nothing else has come remotely close to making me happy” won’t pay the bills. Floor, ceiling, and proofy whiteboard swirled together as I continued in this misery. It was then that I suddenly realized exactly what had led me to pick up programming when I was young, and to continue studying every aspect of computing I could lay my hands on. It was the notion that a computer could make me smarter. Not literally, of course – no more than a bulldozer is able to make me stronger. I thirsted for a machine which would let me understand and create more complex ideas than my unassisted mind is capable of, in the same way that heavy construction equipment can let mediocre biceps move mountains.

What, exactly, has the personal computer done to expand my range of thinkable thoughts? Enabling communication doesn’t count – it makes rather light use of the machine’s computational powers. From the dawn of programmable computing, AI has been bouncing back and forth between scapegoat and media darling, while IA has steadily languished in obscurity. It is very difficult to accurately imagine being more intelligent than one already is. What would such a thing feel like? It is easier to picture acquiring a specific mental strength, such as a photographic memory. The latter has been a fantasy of mine since early childhood. Improving long-term memory would give me a richer “toybox” for forming associations and ideas, whereas a stronger short-term memory might feel like an expanded cache.

Before the lecture was through, I had formed a very clear picture of the mythical photographic memory simulator. With only a few keystrokes, it would allow me to enter any thought which occurs to me, along with any associations. The latter set would be expanded by the program to include anything which logically relates to the entry in question. As the day went on, the idea became less and less clear in my mind, until what once appeared to be a thorough understanding of the problem and its solution had mostly vanished. All that remained was a set of clumsily scribbled notes.
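The behavior described above – enter a thought with a few explicit associations, and let the program expand the association set to everything reachable from it – can be sketched in a few lines. This is purely an illustration of the idea, not Skrode itself (which does not exist in this form); the names `MemoryStore`, `associate`, and `recall` are my own inventions, and a real system would infer logical relations rather than merely walk explicit links. Python is used here for brevity, though the post argues for a homoiconic language.

```python
# Illustrative sketch only: a note store where each entry carries explicit
# associations, and recall expands them transitively.  All names here are
# hypothetical, not from the post.
from collections import defaultdict

class MemoryStore:
    def __init__(self):
        # entry -> set of directly associated entries
        self.links = defaultdict(set)

    def associate(self, entry, *assocs):
        """Record an entry along with its explicit associations (symmetric)."""
        for a in assocs:
            self.links[entry].add(a)
            self.links[a].add(entry)

    def recall(self, entry):
        """Expand associations transitively: everything reachable from entry."""
        seen, frontier = set(), [entry]
        while frontier:
            node = frontier.pop()
            for neighbour in self.links[node] - seen - {entry}:
                seen.add(neighbour)
                frontier.append(neighbour)
        return seen

m = MemoryStore()
m.associate("ant-colony optimization", "stigmergy")
m.associate("stigmergy", "termite mounds")
print(sorted(m.recall("ant-colony optimization")))
# -> ['stigmergy', 'termite mounds']
```

The interesting (and hard) part, of course, is replacing the explicit `associate` calls with automatic inference of “anything which logically relates to the entry” – the sketch only shows the shape of the data, not the intelligence.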

Later, I discovered that the idea was not new at all. This did not surprise me. What surprised me was the fact that none of the attempted solutions had caught on. What I found ranged from the hopelessly primitive to the elephantine-bloated, with some promising but unfinished and promising but closed-source ones mixed in. None of the apps featured expandability in a homoiconic language, even by virtue of being written entirely in one. From my point of view, the lack of this feature is a deal-killer. I must be able to define programmatic relationships between datums, on a whim – plus new syntaxes for doing so, also on a whim. There must be no artificial boundary between code and data, for my thoughts are often best expressed as executable code – even when entirely unrelated to programming.

Thus I began work on Skrode – named after the combination of go-cart and artificial short-term memory used by a race of sentient plants in a well-known novel. The choice of languages came down to Common Lisp vs. Scheme, as the Lisp family is the only environment where I do not feel caged by a stranger’s notions of what programming should be like. I’ve always felt CL to be ugly and bloated with non-orthogonal features. Scheme, on the other hand, is minimal to the point of uselessness unless augmented with non-standard libraries. Neither CL nor any of the existing Schemes seemed appetizing. What I needed was a Lisp system which I could squeeze into my brain cache in its entirety – thus, practically anything written by other people would not fit the bill.

By this time, I had come to believe that every piece of information stored on my computer should be a first-class citizen of the artificial memory. The notion of separate applications in which arbitrarily divided categories of data are forever trapped seemed more and more laughable. Skrode would have to play nicely with the underlying operating system in order to display smooth graphics, manage files, and talk TCP/IP. Thus I set to work on a Scheme interpreter, meant to be as simple as possible while still capable of these tasks. This proved to be a nightmarish job, not because of its intellectual difficulty but from the extreme tedium. None of the mature cross-platform GUI libraries play nicely with the Lispy way of thinking about graphics. (Warning: don’t read Henderson’s paper if you are forced to write traditional UI code for a living. It might be hazardous to your mental health.) I learned OpenGL, and found it to be no solution at all, for the same reasons.

The wonders of the Lisp Machine world and the causes of its demise have been discussed at great length by others. The good news is that CPU architectures have advanced to the point where the fun can start again, on commodity hardware. I have worked out some interestingly efficient ways to coax an AMD Opteron into becoming something it was never meant to be.

Skrode remains in cryostasis, and awaits the completion of Loper – an effort to (re)create a sane computing environment.

Learning an API does not have to feel like dealing with a Kafkaesque bureaucracy.

19 Responses to “Intro, Part II.”


‘While it was possible to write portable programs in Scheme as described in the Revised⁵ Report on the Algorithmic Language Scheme, and indeed portable Scheme programs were written prior to this report, many Scheme programs were not, primarily because of the lack of substantial standardized libraries and the proliferation of implementation-specific language additions.’

hrrm, language design is pretty subjective in this area. people still use fortran for a reason, it was designed to do what it does. you should really take a look at squeak’s design if you’re interested in an OS design that is centered entirely around a single language. this has been the paradigm in smalltalk since its inception, and as you pointed out symbolics used to produce hardware, so it’s not entirely new. but i would suggest that UNIX is the most popular OS design atm, and i suspect that most people at some point conclude that UNIX is also a very decent language design (assuming you look at it from this viewpoint).
it sounds like your main focus is on Intelligence Amplification in environment and i would first and foremost strive to not lose focus of that goal while delving into the details of implementation. stick your head up every once in a while and make sure you’re heading in the direction you want to be. good luck

So, what you hoped for was “that a computer could make me smarter. Not literally, of course – no more than a bulldozer is able to make me stronger. I thirsted for a machine which would let me understand and create more complex ideas than my unassisted mind is capable of, in the same way that heavy construction equipment can let mediocre biceps move mountains.” To achieve this, you must first study the Gestalt laws and learn to Gestalt-program your own mind. For, we are self-programming Gestalt-computers, Stanislav. Relax and look at these five patterns while sensing their perceptual force. Note how they are self-defining in any language. Through perceptual induction they govern all the logical deductions that you will ever be able to make:
SIMILARITY | | | | | | |
PROXIMITY || || || |
CLOSURE ][ ][ ][ ]
DESTINY >>>>>>>
CONTINUITY —|—|—|—
Modus Ponens is made of this… To achieve mastery of Gestalt self-programming you must start using the method of mentat computation (after Frank Herbert, of course) that I describe in the paper in this link: http://iro.on-rev.com/Statistical_masking.htm. Then you can return to using electronic computer programming languages, after having established your own Gestalt logical basis.
Regards
Ingar

I have no problem remembering formulae, quotes, and other tidbits quite well after seeing them just once. The objective described in this post was the creation of an automated system for storing and inferring connections between a set of items measured in the gigabytes. Flash cards and the like are of no use here.

Where are you with this goal today? I thirst for the same thing: I want the machine to tell me what I *could* have associated if I had enough operational memory capacity. And not even on the global level… just on the personal level of the things I’ve read and seen in the past.

Will I remember that I wrote this and the accompanying thoughts so poorly expressed in these words? Will I remember enough of your writings that I read today to consciously (and thus, maybe, more effectively) inform my thoughts in the future? And why would I need to read all of them instead of just associating Stanislav’s output to be included in my association machine? RSS is such a poor substitute.

Yes, I did, some years ago. Vaguely interesting, and perhaps more than adequate for storing plain/hyperlinked text. But – browser apps do have a tendency to break and malfunction in annoying ways over time as browsers ‘progress.’

Just now, after reading all the comments, I’m beginning to wonder if maybe I want the same thing, just coming from another direction. For some time I’ve wanted a web browser that would remember what I read, when I read it, and how I found it; would keep track of anything I said in response; would let me search through all of that; and would be usable from wherever.

I’m interested in this. I’ll read your progress reports. Do you have pages of things others could potentially do to help?