Someone was once phone-screening me for a job at a startup, and they asked me to name my two strongest languages. At the time, the answer was Java and C++, partly because I had just finished writing Java development tools in C++, and partly because while I loved Scheme and Smalltalk, I knew that anyone using them in day-to-day production would quickly expose me as a dilettante if I were to answer with my heart instead of my head.

After I said “Java and C++,” the follow-up was interesting. The interviewer asked me: If you could change both languages, name one feature from Java that you would add to C++, and why. And name one feature from C++ you would add to Java, and why.1

Today I would answer that question differently, of course, but I bring it up because there is a feature of Common Lisp, Arc, and C++ that I miss whenever I use a conventional object-oriented language like Smalltalk or Ruby: lvalues and references. In short, what I miss is the ability to redefine the assignment operator.

Or do I? Let’s have a look at what that means and why I miss it.

Defining a = b

In Ruby, you have a limited ability to define the assignment operator. Specifically, you can define methods that look like fields or instance variables:
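A sketch of what that looks like (Config is an invented name; the last line shows an ordinary variable binding for contrast):

```ruby
class Config
  attr_reader :something

  # Looks like assigning to a field, but it is really a method named "something="
  def something=(value)
    @something = value
  end
end

config = Config.new
config.something = :fizzbuzz   # a method call in assignment clothing

something_else = :fizzbuzz
```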

Writing something_else = :fizzbuzz, on the other hand, binds :fizzbuzz to the local variable something_else. But that’s really it. Every use of “=” in Ruby is one of these cases: binding a variable, calling “[]=,” or calling a named method on an object and passing one parameter.

So?

Well, there’s something I would like to do in Ruby that simply cannot be done. If I have an array I make from a range, say (1..100).to_a, how do I double the even numbers in that array? Obviously, I can write things like
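A minimal imperative sketch (one of several obvious shapes):

```ruby
a = (1..100).to_a

# Mutate the array in place, slot by slot
a.each_index do |i|
  a[i] *= 2 if a[i].even?
end
```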

But imperative loops are yucky. I could write it a little functionally:

a = a.map { |n| n % 2 == 0 ? n * 2 : n }

But that creates a new array and overwrites my variable. Besides the fact that it seems wasteful, this is absolutely the wrong solution if the original array is shared elsewhere in my code: I am not changing the original array, I am merely binding a local variable to a different array. Try writing a function called “double_the_even_values!(arr)” (that doubles the values of arr), and you’ll see what I mean.
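To see the failure concretely, here is a sketch of that function (the method name comes from the text; the body is my assumption):

```ruby
def double_the_even_values!(arr)
  # Rebinds the parameter to a brand-new array; the caller never sees it
  arr = arr.map { |n| n.even? ? n * 2 : n }
end

a = [1, 2, 3, 4]
double_the_even_values!(a)
a  # still [1, 2, 3, 4], the original array is untouched
```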

There’s a reason it doesn’t work that way. Never mind whether Matz was copying some of Smalltalk’s assignment semantics when he designed this piece of Ruby, let’s talk about if statements and cases. Once upon a time, people wrote programs with lots of explicit branches in their code. If this is the case, do this. If that is the case, do that.

The problem with languages like Ruby and Java is that everything is an object and every action is a message, except when it isn’t.

One of the big ideas in OOP was polymorphism. When you program with interfaces, a lot of the logic of what to do for each case goes into your class hierarchy: that’s what method dispatching is all about. This is great, because when you want to do something new, the whole idea is that you create new classes with interfaces just like your existing classes and they just work.

The Reasoned Schemer takes us into the world of logic programming, a world where we take fairly ordinary logical functions and reason with them, building higher-order control structures like backtracking out of simple functions.

Wait a second. Building higher-order control structures out of simple functions? This sounds suspiciously like working with Monads… The Reasoned Schemer contains a concise and remarkably easy introduction to decoupling control flow issues like backtracking and error handling from program logic issues. Recommended!

Okay, we can stop smoking the good stuff, nothing just works and OOP is not a silver bullet. But the key idea is that by moving some of our logic into our objects, we can write more generalized code elsewhere, code that does not need to know what an object is doing inside itself.

Let’s look at a different paradigm: Higher-order functional programming. In this paradigm, functions are first-class values: you can write functions that take functions as arguments and/or return functions. Again, there is a de-coupling: if you write a compose function that creates a new function out of two or more other functions, it does not need to know what those other functions do, just how to wire them together. Monads are based on this premise as well, on the premise of writing generalized functions that do not need to know the intimate details of the things they consume and produce.
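A compose can be sketched in a few lines of Ruby (all names here are my own):

```ruby
# compose(f, g) wires two functions together without knowing what they do:
# the result applies g first, then f
def compose(f, g)
  ->(x) { f.call(g.call(x)) }
end

double    = ->(n) { n * 2 }
increment = ->(n) { n + 1 }

increment_then_double = compose(double, increment)
increment_then_double.call(3)  # => 8
```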

The problem with languages like Ruby and Java is that everything is an object and every action is a message, except when it isn’t. The things that are objects and messages are remarkably flexible and work well. The things that aren’t objects and messages are like case statements in older programs: places where you cannot extend the language without dealing with thousands of these case statements that are all coupled to each other in unexpected ways.

In Ruby’s case, the rules for what it means when you write expression1 = expression2 are hard-coded into the parser like a case statement. You cannot create a select= method that modifies its receiver. (Actually, you can define the method. Good luck calling it without resorting to send.)
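That parenthetical is easy to verify (Table and lookup= are hypothetical names):

```ruby
class Table
  def initialize
    @slots = {}
  end

  def lookup(key)
    @slots[key]
  end

  # Perfectly definable, but `t.lookup(:a) = 1` is a syntax error;
  # the only way to call it is send
  def lookup=(key, value)
    @slots[key] = value
  end
end

t = Table.new
t.send(:lookup=, :a, 1)
t.lookup(:a)  # => 1
```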

But now we have a serious inelegance: why do three kinds of getting and setting values use one syntax, but the fourth uses a different syntax?

How Common Lisp handles this

Common Lisp has something called a Generalized Variable (also called a generalized reference).2 In OO terms, it’s an object that has a setter and a getter method, although you can call them the access and update operations if you prefer.

Common Lisp has a macro, setf, that performs an update. In short, if you call:

(setf expr1 expr2)

Common Lisp does something analogous to:

expr1.update( expr2.respond_to?(:access) ? expr2.access() : expr2 )

Let’s pretend that Ruby did this. This would be very interesting! For starters, instead of writing an accessor this way:
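Something like this, perhaps, with the generalized-variable rewrite following it (GeneralizedVariable is an invented name, and remember that this is pretend Ruby):

```ruby
# The conventional accessor pair:
class Bar
  def something
    @something
  end

  def something=(value)
    @something = value
  end
end

# ...you would write one method that returns an object with
# access and update operations. As the text says, this exact code
# cannot work: @something here lives in the wrapper, not in the Bar.
class GeneralizedVariable
  def access
    @something
  end

  def update(value)
    @something = value
  end
end

class Bar
  def something          # reopened: now it returns a generalized variable
    GeneralizedVariable.new
  end
end

some_bar = Bar.new
some_bar.something       # => a GeneralizedVariable
```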

Now whenever you write some_bar.something you would get a generalized variable and Ruby would figure out when you need its value and when you need to assign something to it. Of course, that looks like a lot of boilerplate, but you could rewrite the existing attribute class methods to do this for you. (Actually, this exact code wouldn’t even come close to working because @something would belong to the new object, but squint at it sideways and pretend we can do that.)

You could then write your select method to return either a single generalized variable or a collection of them, depending on how you want your semantics to read. For example, you could choose which of the following ought to work:
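One of the candidate semantics can at least be simulated in today’s Ruby with an explicit wrapper (ArraySlot and select_refs are names I made up for the sketch):

```ruby
# A generalized variable over one array slot: access reads it, update writes it
class ArraySlot
  def initialize(array, index)
    @array, @index = array, index
  end

  def access
    @array[@index]
  end

  def update(value)
    @array[@index] = value
  end
end

# A select that returns generalized variables instead of plain values
def select_refs(array)
  refs = []
  array.each_index { |i| refs << ArraySlot.new(array, i) if yield(array[i]) }
  refs
end

a = (1..10).to_a
select_refs(a) { |n| n.even? }.each { |slot| slot.update(slot.access * 2) }
a  # => [1, 4, 3, 8, 5, 12, 7, 16, 9, 20]
```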

The reason why this is interesting is that the original language designer does not need to worry about whether people want to write select methods that can update their host collections. The language designer merely needs to write things in a sufficiently general way, and people will work out the implications for themselves.

Returning to Ruby (and a lot of other languages), now we see why everything being an object and everything that happens being a method invocation “except when they aren’t” is a problem: assignment is one of those things that isn’t, and because of that, there are places where our programs aren’t going to be consistent with the language’s philosophy. If assignment were always a method invocation, I suggest that there would be a way to write a select that modifies its receiver.

Of course, there would also be things like this:

2 = 3

Which would now pass blithely through the parser only to raise a NoMethodError at run time, because numbers do not implement an update method. (At least, not in Ruby. Legend has it that this was legal in early versions of FORTRAN. But those systems were built by Real Programmers who eschewed type safety, quiche, and semiconductor memory.)

I think it’s a good thing when languages are highly consistent, in the vein of Scheme, Smalltalk, and Forth. Although you cannot redefine “:=” in Smalltalk for select methods, you can’t redefine it for anything else either, so at least we’re clear that you can never do it, rather than only some of the time. Such languages are remarkably consistent and quite stable, and although they may be unfamiliar to someone approaching them from a different paradigm, they are highly discoverable.

Turtles, it’s always the damn turtles

Like any program, languages evolve over time as people realize they need new features. A lot of the difference between languages boils down to a basic decision every implementor has to make when deciding how to add a new feature:

Should I add this feature? Or should I make this feature possible?

Adding a feature is like adding a new case to an if statement. You want List Comprehensions? You’ve got it. Syntactic sugar for closures? Done.

Making a new feature possible is the deeper, more challenging option. It involves rewiring the way the language works such that the new feature becomes a special case of a more general language capability. So instead of modifying Ruby to support select=, you add generalized variables, which make select= possible.

Some people will argue with this,3 but I think languages where “It’s turtles all the way down” are better than languages that are exceedingly well-thought-out, painstakingly chosen collections of features that amount to a heap of if statements and special cases for everything.

Turtles all the way down is a philosophy where you try to define a ridiculously minimal set of orthogonal axioms and build the language by combining those axioms. At the lowest level it feels like pure math: Combinatory Logic can be built out of three, two, or even one combinator if you are careful. Scheme has five special forms. But with the right abstractions and syntactic sugar on top, you can produce a language with an amazingly diverse feeling to it.
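That minimalist flavour can be sampled directly in Ruby lambdas: the S and K combinators alone are enough to derive the identity function (and, famously, everything else):

```ruby
# S and K, the two classic combinators, as curried Ruby lambdas
S = ->(x) { ->(y) { ->(z) { x.(z).(y.(z)) } } }
K = ->(x) { ->(y) { x } }

# Identity is not an axiom here, it falls out of the other two: I = S K K
I = S.(K).(K)
I.(42)  # => 42
```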

But the advantage of having everything in the language built on top of a small set of axioms is that when you want to make a new feature possible, it’s often just a case of burrowing down a level of abstraction and combining the existing things in new ways. For example, it could be as simple as rewriting select to return generalized variables instead of values… if Ruby really treated “=” as a method instead of treating it as a special case that is sometimes a method and sometimes something else and sometimes a third thing.

Of course, you can make new things possible in languages that aren’t turtles all the way down. But your new things never feel like they fit with the things already in the language, just as pairing select and update doesn’t fit naturally alongside something, something=, [], and []= in Ruby. And that means that the things you build aren’t as discoverable or as readable as they could be.

I started by saying that I like the way you can define lvalues in C++, exactly because you can define methods like select that return generalized variables. But following that thread, I ended up with the conclusion that what I really like is a language with as few special cases as possible, a language which feels like everything is built out of a small set of well-chosen primitives or axioms that I can combine and recombine to create new things that work just like the old things.

A language where it’s turtles all the way down.

Now this essay is about turtles, so I am probably insane for starting with an anecdote that has every potential of blowing up into a debate about the right way to interview people. So I’ll just say a couple of things about that: First, it’s a screening question. If you ask it and then argue over the phone with the candidate about whether adding the const keyword to Java is a good idea, you are missing the point. The point of a screening question is to weed out people who are flat out lying about their experience, not to decide if they happen to be your intellectual and cultural clone. The moment they demonstrate that they actually have some non-trivial experience with the two languages, you are done.

Second, language nuance questions have limited value. This is because while there is a correlation between tools knowledge and experience, the argument for causality is tenuous at best. Meaning, people with non-trivial experience usually know a lot more about their tools than people without experience. However, it is possible to learn a lot about a tool without actually having real-world experience. And maybe somebody knows a lot about a tool but doesn’t know the answer to the specific trivial question you want to ask. So… a question like this has some utility as a filter, and perhaps it is useful in a longer interview if you use it as an excuse to get the candidate to talk about how their proposed changes would have helped them with their actual projects, which leads them into talking about their experience.

Now that we've got that out of the way, I think it would be only fair to recognize that the Lisp 'setf' form is, in fact, only one means of value assignment in Lisp. In my experience, 'setq' is just as commonly used. In fact, most usage of 'setf' expands to more or less exactly the examples you give that are so easy to accomplish in Ruby already: a special 'setter' method for a collection or data-holding class.

Also, to your argument that not allowing assignment to be intercepted prevents Ruby or Smalltalk from being "truly OO", I offer this suggestion: think of assignment simply as aliasing for the convenience of the programmer reading the code. Local variable assignment does not create, modify, or destroy objects; rather, it simply provides a lexical clue (accurate or not) about the scope and intended use of some object already created through other means.

Think 'binding', not 'assignment'. The language offers you some syntactic sugar for a commonly-desired task -- binding a new lexical name for an existing value -- without forcing you to wrap that value in an accessor method.

Lately I've been musing about this concept in the reverse direction. It's not enough for the turtles to go all the way down. They must also go all the way up.

Ruby is very fluid for such a complicated syntax (compared to, say, Scheme), and what we see in the Ruby community is a whole bunch of people whipsawing ideas back and forth. Ruby has enough community behind it now that it grows new turtles every day; they are evaluated and evangelized, and the good turtles gradually become standard turtles, like Test::Unit, or Capistrano, or FXRuby.

But consider turtle-friendly languages with smaller communities behind them. Like Forth, or perhaps Lua. Both of these languages make it possible to do OO programming. All you have to do is sit down and write the dispatching and inheritance mechanisms from scratch. Several people have done so, but many people consider this to be an "application" or "framework" written in the language, rather than more of the language itself... and for whatever reason there is insufficient churn to standardize. So those turtles aren't part of the language, don't come with the standard toolkits, and if you want to download and use them, you have to wander off into the weeds and evaluate 5 subtly different versions.
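For a taste of what writing the dispatching and inheritance mechanisms from scratch involves, here is a toy prototype-dispatch sketch in Ruby, with hashes standing in for objects (every name is invented):

```ruby
# An object is just a hash: a prototype link plus a method table
def make_object(proto = nil, methods = {})
  { proto: proto, methods: methods }
end

# Dispatch walks the prototype chain until someone answers the message
def dispatch(obj, message, *args)
  while obj
    impl = obj[:methods][message]
    return impl.call(*args) if impl
    obj = obj[:proto]
  end
  raise NoMethodError, "message not understood: #{message}"
end

animal = make_object(nil, speak: -> { "..." })
dog    = make_object(animal, speak: -> { "woof" })
cat    = make_object(animal) # inherits speak from its prototype

dispatch(dog, :speak)  # => "woof"
dispatch(cat, :speak)  # => "..."
```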

I'm not sure exactly how or why Ruby got it right (and we could probably have a splendid argument about whether or not it actually did get it right) but I feel like it did, and I like it.

Sorry, I was not clear about what I wanted from that function: I have clarified it in the text.

And a "special setter method" is exactly what I don't want. I don't want my programs littered with special cases.

Now for Smalltalk: Smalltalk has clean assignment in that you can't intercept assignment anywhere. I don't consider that a drawback; I will review the text to make that clear.

Ruby, OTOH, has this sometimes-you-can-redefine-it, sometimes-you-can't. In Ruby, writing foo.bar = :bash is NOT a binding, it is always a method call, which is why I have trouble with the idea that foo = bar is not a method call and foo.select { ... } = bar is illegal.

@reginald: I'm not sure that I would give up the readability of 'a.foo = bar' over 'a.setFoo(bar)' in exchange for a bit of semantic cleanliness. The more strict model may be the "correct" one, but the Ruby practice of having setters impersonate assignment leads to some nice encapsulation of data objects, IMHO.

Nor am I entirely certain that I would want to give up the efficiency and simplicity of 'b = a.some_method; b.foo = bar' if it meant that the assignment to 'b' in fact created some new lvalue-box into which the result of the first expression was stuffed. I don't find that it takes a lot of mental gymnastics to keep track of the difference between a local variable and a method call.

I'll admit to being a "dirty" programmer, though. When working in Lisp, I use side-effects, and I've even been known to dust off a global variable or two in my Ruby and Perl code. I suspect that we simply have different motivators when it comes to language design.

The actual problem here is that assignment itself is not object-oriented; or rather, local variables aren't. (Without mutable variables the problem disappears anyway.)

When Uedauhes translates a=1 to setSlot('a,1) there is the implicit assumption that a is a slot of the current object (on which setSlot is invoked), not a local variable.

Local variables need to be objects themselves if you want to catch assignments to them, but local objects of that kind are strictly impossible and loosely very garbage-collection-intensive.

But what you want is not actually local variable assignment, but assigning to the return value of a function. In Ruby this follows pretty simply from what it does with slots. When you get with 'a', you set with 'a='. Logically, when you get a value with 'lookup(x)', you should modify that value with the method 'lookup=(x,y)', invoked by 'lookup(x) = y'. You could even extend this to take blocks as well, and then your select trick (first version) would work.

The select{}.each{|x|x=...} is harder, you'd actually need reference types which overload plain assignment, and some hackery to integrate that into ruby.

Also I think it is a good thing that you can intercept slot assignment ('slot=') at the class level. Usually you need slot-specific behaviour that affects the whole class. In C++ you could only make the slot itself intercept the assignment, and thus you would need a class for each such slot, and there you don't even know the surrounding object. (I'd like to, in a persistence layer.)

Hmm, squinting long enough your generalized variable is exactly the same as a reference. :-)

Btw. in your select{...}.each{|x|...} example the last block could just as well store away the x and make the modification quite some time later. Or in other words: What would ...each{|x| y=x; y=1} even mean and how do we know whether x is a reference or a plain value?


This is exactly the point of turtles all the way down!

If we are just piling features on top of features we try to figure out how to make a select permit updating without changing anything else anywhere in the language.

The result is one more special case.

If, instead, we allow those references to “escape,” the semantics are different, true. But who is to say that someone won’t think of a really interesting use for these escaping references?

Lua is a great example of a language built from a minimal set of features that can be combined to provide great flexibility:

"Lua is powerful (but simple).A fundamental concept in the design of Lua is to provide meta-mechanisms for implementing features, instead of providing a host of features directly in the language. For example, although Lua is not a pure object-oriented language, it does provide meta-mechanisms for implementing classes and inheritance. Lua's meta-mechanisms bring an economy of concepts and keep the language small, while allowing the semantics to be extended in unconventional ways."

Strangely enough, Smalltalk-72 was also a TAWD'ry language; objects received their messages before any parsing was done on them, and could freely decide how to evaluate them. In that respect, objects were a little like Lisp macros.

For some reason, later evolutions of Smalltalk (I think -76, but -80 for sure) tightened things up, making assignment primitive - probably in order to employ a more efficient bytecoded VM; but then Self came along and reversed it again, taking the (more common these days) approach of having setters and syntax magic, and taking advantage of much-enhanced machine speed and compiler technology to primitivise wherever possible.

Languages where you build up domain-specific or app-specific layers of abstractions result in a situation where I come along and need to read your code and, since I think differently than you, say "What the heck have you done?" and "What does this do?" Because I'll see things that are not defined in the language manual.

This sort of problem always exists, to a degree, but it becomes more intense when a new language is, in effect, created for each application or each programmer.

Look at the bad reaction Steve Bourne got when in his new Bourne Shell he reworked the C language a little bit with macros.

Poetic flights of intense creativity can backfire when you are implementing systems for a bank or a shoe factory. Modern day financial corporations are computers, networks, and their operators, and if the code is not accessible the company is broken.

Sometimes it is fun to write something so complex and original that none of your friends can understand it. And it's fun to feel smart. But this is just Temptation to a serious Sin.

Of course, if every literate person in society had been programming the same language, say, Ruby, since the age of 4, then the standard allowed vocabulary would be larger.

Languages where you build up domain-specific or app-specific layers of abstractions result in a situation where I come along and need to read your code and, since I think differently than you, say "What the heck have you done?" and "What does this do?" Because I'll see things that are not defined in the language manual.

While this is absolutely true, it is not true that programs devoid of “things that are not defined in the language manual” relieve the reader of wondering what the heck is going on.

If you replace the abstractions with boilerplate, the reader is left with the daunting task of trying to figure out what a now-gargantuan program does.

Over time, the reader learns what various idioms and patterns are and how to recognize them through repetition. However, each one must be reviewed in case it is slightly different.

For example, if you do not use Object#andand, the reader must figure out the difference between foo.bar && foo.bar.bash and (fubar = foo.bar) && fubar.bash and foo.bar.bash rescue nil. All have slightly different semantics.
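A quick check of those differences with a stand-in class (Foo here is hypothetical; Object#andand is the real andand gem):

```ruby
class Foo
  def bar
    nil  # pretend the association is absent
  end
end

foo = Foo.new

foo.bar && foo.bar.bash          # => nil; short-circuits, but evaluates bar twice
(fubar = foo.bar) && fubar.bash  # => nil; short-circuits, evaluates bar once
(foo.bar.bash rescue nil)        # => nil; but would also silently swallow any
                                 #    StandardError raised inside bar or bash
```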

It's always going to boil down to the age-old problem: choosing between a program where, as a reader, I invest a little time learning the abstractions to save a lot of time comprehending the program, and a program that eschews the up-front investment in favour of taking longer to incompletely understand it as I read.

Unfortunately I never managed to find a nice syntax for something similar to a C++ reference. Reminder: a C++ reference is syntactic sugar that makes indirect access through a pointer to an lvalue look like direct access, i.e. dereferencing is automatic.