The impression that the phrase “this interface feature is intuitive” leaves is that the interface works the way the user does, that normal human “intuition” suffices to use it, that neither training nor rational thought is necessary, and that it will feel “natural.” We are said to “intuit” a concept when we seem to suddenly understand it without any apparent effort or previous exposure to the idea.

It is clear that a user interface feature is “intuitive” insofar as it resembles or is identical to something the user has already learned. In short, “intuitive” in this context is an almost exact synonym of “familiar.”

The term “intuitive” is associated with approval when applied to an interface, but this association and the magazines’ rating systems raise the issue of the tension between improvement and familiarity. As an interface designer I am often asked to design a “better” interface to some product. Usually one can be designed such that, in terms of learning time, eventual speed of operation (productivity), decreased error rates, and ease of implementation it is superior to competing or the client’s own products. Even where my proposals are seen as significant improvements, they are often rejected nonetheless on the grounds that they are not intuitive.

It is a classic “catch 22.” The client wants something that is significantly superior to the competition. But if superior, it cannot be the same, so it must be different (typically the greater the improvement, the greater the difference). Therefore it cannot be intuitive, that is, familiar. What the client usually wants is an interface with at most marginal differences that, somehow, makes a major improvement. This can be achieved only on the rare occasions where the original interface has some major flaw that is remedied by a minor fix.

Complaining that a programming language “Violates the Principle of Least Surprise” is a complaint that it fails to appeal at first glance, without any training or investigation. But if a new language is “intuitive,” it can only offer you a marginal improvement over the languages you already know well.

You needn’t look at functional languages like Scala to see this at work. Is Ruby’s for loop an improvement over Java’s? By how much? Ruby’s big win over Java in that regard is the ease with which you can use Enumerable’s collect, select, detect, and inject methods. Which, of course, are not familiar to the programmer with a grounding in for loops. They require study to understand. But once understood, they make code easier to read thereafter.
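To make the contrast concrete, here is a sketch (plain Ruby; the task itself is invented for illustration) of the same job written with a for loop and with Enumerable's methods:

```ruby
numbers = [1, 2, 3, 4, 5]

# The familiar way: accumulate the squares of the even numbers with a for loop.
squares = []
for n in numbers
  squares << n * n if n.even?
end

# The Enumerable way: unfamiliar at first, easier to read once learned.
squares2 = numbers.select { |n| n.even? }.collect { |n| n * n }

# inject folds the collection into a single value, here a sum.
sum = numbers.inject(0) { |total, n| total + n }
```

Both versions compute the same squares; the second says *what* is being done (filter, then transform) rather than *how* the bookkeeping works.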

post scriptum:

As Justin points out in the comments, the real meaning of the “Principle of Least Surprise” is that languages should be consistent. The principle applies to how the language behaves when you already know something about it. So if you see (_+_), the question is whether it makes sense given that you already know that underscore is a function parameter.

I am under the impression that the "Principle of Least Surprise" has been taken to mean three completely different things:

1) Least surprise to a random person. This is what you seem to be criticizing.

2) Least surprise to the language designer. That is, a language that works roughly how the designer envisages, without flow-on effects that surprise the language designer. This is the stated design philosophy of a number of languages designed initially for their designers' use.

3) Least surprise in an information-theoretic sense; that is, the surprisal of the way something has been done, conditioned on the portion of the language known, should decrease as the portion of the language known increases, and should have a low upper bound once the portion of the language known is above some epsilon. This is the principle that I take to be in play, which means that consistency (in an aesthetic sense; that is, where inductive reasoning about the system will eventually supervene upon deductive reasoning) is a priori connected to least surprise, in that they should (if we live in a universe that makes sense, which is not something that I feel we can provably prove) entail each other.

reginald braithwaite: (3) speaks to the Kantian in me. Just to further elaborate, perhaps unnecessarily: what (3) above says is that, given a subset of the programming language greater than epsilon, we can reason about the language inductively with a reasonable likelihood of being correct; that is, we can expect the language to behave how we think it should, and we will not often be wrong. This does not supervene upon the language being familiar, but on the language being consistent. The conflation comes in as people may incorrectly include their knowledge of other programming languages into their knowledge of the programming language of interest, meaning that anything that is different will be inconsistent, and thus infinitely surprising.

This statement applies to all programming languages, so we should apply it to Lisp. In what unintuitive way should Lisp be improved? What other language beats Lisp (in one area) thanks to a greater willingness to have unintuitive features?

BTW I actually think you're essentially right, and these questions have answers, because intuitiveness today (a function of our existing ideas) is not a reliable guide to what is good.

One of the dangers in arguing about Lisp is the temptation to slip into the Turing Tar Pit where the Lisper claims "You can do that in Lisp by creating a new language based on Lisp."

Lisp is a chameleon!

But I can provide an example for the sake of contemplation: Lazy evaluation (which began in certain Lisp dialects and can be implemented easily in Lisp) provides an improvement in certain functional-programming contexts when elevated to a built-in feature of the language.

This is not to suggest that Haskell is overall superior to Common Lisp, but for certain things, I believe it is an improvement.
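As it happens, Ruby later absorbed a built-in version of this very idea; as a sketch of what laziness buys you, Enumerator::Lazy (standard since Ruby 2.0) lets you map over an infinite sequence and only compute what you actually ask for:

```ruby
# An infinite sequence; without .lazy, collect would never terminate.
first_squares = (1..Float::INFINITY).lazy.collect { |n| n * n }.first(5)
# Only the first five elements are ever computed.
```

The eager equivalent would try to square every number before taking five; the lazy version interleaves demand and computation, which is exactly the behavior Haskell makes the default.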

I think you may be setting up a strawman here. You can define "intuitive" to mean familiar and then argue it into the ground, I agree, but I rarely hear anyone I consider reasonable use the term that way.

A more useful definition for "intuitive" is "not easily expressed linguistically." Given this definition, we want user interfaces to be intuitive (in fact, this is the entire reason to have GUIs). A programming language is a user interface. Therefore, we want programming languages to be intuitive.

For example, why do we prefer "foreach" over "for"? It's hard to explain, but we do. It's "intuitively" better. Of course, I can explain why it is better: it more directly corresponds to the concept of iterating over a list, it is less verbose, it is unambiguous, it is readable, etc. Then again, I think about programming languages a lot. Your "usual" programmer might not know why they like it better; they just do. They would call it "intuitively better."
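In Ruby the same preference shows up as each versus for. A minimal sketch (the fruit list is invented for illustration) of one concrete reason the foreach style is cleaner:

```ruby
fruits = ["apple", "banana", "cherry"]

# The for loop: familiar, but its loop variable leaks into the enclosing scope.
seen = []
for fruit in fruits
  seen << fruit
end
leaked = fruit  # for does not create a new scope; fruit is still "cherry" here

# each: corresponds directly to "iterate over this list",
# and the block parameter is confined to the block.
seen_each = []
fruits.each { |f| seen_each << f }
```

Both loops visit the same elements, but only the for loop leaves its iteration variable lying around afterwards.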

In conclusion - "intuitive" does not mean "familiar" except to the layman. Familiar bad, intuitive good.

Dehora used the right term - an intuitive system is one that provides "affordances": visual hints that evoke conventions the user already knows, letting him guess how something will behave without actually trying it.

(For example, a button is intuitive: you press it. A button-like object that needed to be dragged would be unintuitive.)

Now, of course, you're right to some extent - for someone to "get" such a hint, they need to be familiar with the convention it's trying to evoke.

Basically, a system can be unfamiliar but intuitive, if it presents familiar _components_ in a new, but discoverable, way.

"Condition #1: Both the current knowledge point and the target knowledge point are identical. When the user walks up to the design, they know everything they need to operate it and complete their objective.

Condition #2: The current knowledge point and the target knowledge point are separate, but the user is completely unaware the design is helping them bridge the gap. The user is being trained, but in a way that seems natural."

In this scenario a well-designed language can be more different from the status quo, and therefore more of an improvement, than one that ignores this principle, while still not scaring people off.

I always found "violating the principle of least surprise" to be very important, BUT within the context of the current language, not within the context of all languages. Look at Java, for example: it's riddled with inconsistencies. Sometimes it's .size() and sometimes it's .length() and sometimes it's .length, and don't even get me started about the damn Calendar class.

Different classes performing the same conceptual operation within a language should use the same syntax to do it, thus not "surprising" you every time and making you constantly have to check the docs (or rely on the crutch of your IDE) for the less frequently used classes.

This familiarity is an incredible feature and doesn't hamper innovation in any way. You're free to add new things that act in new ways, but if you're doing the same thing then don't surprise me with a different API for it.
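Ruby's core classes are a handy illustration of the consistency this comment is asking for: Array, String, and Hash all answer the same conceptual question with the same method names (length and size are aliases on each), so you never have to check the docs to remember which spelling a class uses. A quick sketch:

```ruby
# One conceptual operation, one API, across three unrelated classes.
lengths = [[1, 2, 3].length, "abc".length, { a: 1, b: 2, c: 3 }.length]
sizes   = [[1, 2, 3].size,   "abc".size,   { a: 1, b: 2, c: 3 }.size]
```

Contrast this with Java's .size()/.length()/.length split: the operation is identical, but the syntax surprises you anew with every class.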

Just to nitpick a bit, it is possible to be radically different and yet more familiar/intuitive. The original mac was more familiar/intuitive because it used metaphors from a completely different domain that the user was already familiar with. Similarly, languages with an algebraic syntax are more familiar/intuitive to those who have been taught that syntax in school.

Intuition comes from being able to make decisions without conscious processing, which relies on the brain's pattern-matching ability, and those patterns have to be learned. Something may appear intuitive at first if you have learned the patterns, or if they were easy to grasp. When you look at a mouse, it's not intuitive how it would be used: do you wave it in the air? point it at the screen? But once you know, the pattern is so easy and immediate that it quickly becomes intuitive.

Familiarity is not sufficient for something to be intuitive. Something can be familiar and yet, with no clear patterns emerging, take an incredible amount of time to become intuitive. And intuition can be wrong, sometimes deadly wrong. Our intuition tells us, when we approach a turn too fast, to step on the brakes, which matches our experience in most driving situations. Not so with a bike: slowing down forces a wider turn, so if you're approaching too fast -- step on the gas.