What the hell is Blub? I take from context that it may be a generic pejorative term for programming languages that encourage writing pablum instead of programs. Or maybe it really is a new language out there with a huge following I'm unaware of.

Blub falls right in the middle of the abstractness continuum... As long as our hypothetical Blub programmer is looking down the power continuum, he knows he's looking down. Languages less powerful than Blub are obviously less powerful, because they're missing some feature he's used to. But when our hypothetical Blub programmer looks in the other direction, up the power continuum, he doesn't realize he's looking up. What he sees are merely weird languages... Blub is good enough for him, because he thinks in Blub.

The interesting thing about this paradox is that almost any language could be Blub. The prerequisites for a language being Blub are (a) there is at least one language less powerful than Blub, (b) there is at least one language more powerful than Blub, and (c) there is at least one programmer using Blub who accepts (a) but rejects (b) because she cannot see how the more powerful language is more powerful. She does not think in the idioms that the more powerful language affords.

The Little MLer introduces ML (and OCaml) through a series of entertaining and straightforward exercises leading up to the construction of the Y Combinator.

ML and OCaml introduce powerful strong typing and type inference. Both are great languages to learn: you will stretch your understanding of defining types and writing correct programs.

When I use the term, I am thinking of the language and also the programmers around it. Could Java be Blub? Sometimes, possibly often, but only when I'm thinking about Java programmers who dismiss Ruby's features as unnecessary. Could Ruby be Blub? Sometimes, but only when I'm thinking about Ruby programmers who dismiss macros as unimportant.

Could Lisp be Blub? I suspect that Erlang and Haskell programmers might say that it is, provided we can find a Lisp programmer who feels that all progress in programming languages stopped when Common Lisp was standardized.

So... I use the term "Blub" to refer to a programming language in the context of intransigent programmers who feel that their chosen tool is the best tool possible.

Programming consists of overcoming two things: accidental difficulties, things which are difficult because you happen to be using inadequate programming tools, and things which are actually difficult, which no programming tool or language is going to solve.

This provokes a very obvious question: How do we know which things are accidentally difficult and which are actually difficult? Is it only because we haven't discovered the right tool yet?

It's easy to find a Java programmer who believes that all of the Design Patterns in the GoF's book are necessary. She believes that the difficulties of applying those patterns are actual difficulties of programming systems. It is only when she learns a different language that she realizes how the patterns were strongly driven by limitations in Java's object model.

At that point she has an epiphany and understands that what she thought were actual difficulties were merely accidental difficulties. And the line between "accidental" and "actual" moves for her.
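To make that epiphany concrete, here is a minimal sketch (in Python, with names of my own choosing; it is not from the original text): the Strategy pattern, which requires a class per algorithm in a language without first-class functions, collapses into an ordinary function argument once functions are values.

```python
import functools

# Strategy pattern as it is often written in a language without
# first-class functions: one class per interchangeable algorithm.
class Ascending:
    def compare(self, a, b):
        return a - b

class Descending:
    def compare(self, a, b):
        return b - a

def sort_with_strategy(items, strategy):
    return sorted(items, key=functools.cmp_to_key(strategy.compare))

# With first-class functions, the "pattern" dissolves into an argument.
def sort_with_function(items, compare):
    return sorted(items, key=functools.cmp_to_key(compare))

print(sort_with_strategy([3, 1, 2], Ascending()))         # [1, 2, 3]
print(sort_with_function([3, 1, 2], lambda a, b: b - a))  # [3, 2, 1]
```

The difficulty of writing and maintaining the extra classes was never an actual difficulty of sorting; it was an accident of the tool.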

No matter how much each of us thinks we know right now, are we nevertheless like this Java programmer, unable to see the difference between accidental and actual difficulties because we simply haven't discovered a more powerful tool?

I say this because, to me at least, Blub is synonymous with an improper choice of resources. I know this isn't the "canonical" definition of it, but I like mine better. Abstractions are great when they are needed, but using them when they are not is close to Rube Goldberg-oriented programming.

So, are we Blub programmers, under my definition? The scary answer is that most of the time we don't know. Highly specialized languages aside (SQL, for example), it is difficult to tell whether more abstractive ability, or speed, or better libraries, or _________ would be better for your project.

In the end would Java's libraries prove to be more of a benefit than Ruby's blocks? That almost sounds like comparing apples and bicycles, but it could be a very important question. It also seems like one that is almost impossible to answer in some cases.

It is not logical to say that this or that language is Blub, not even Java, because it depends on the programmer denying the existence (or usefulness) of more powerful programming languages. Actually, Blub may be more a type of programmer than a type of programming language.

It really is pretty easy to tell the difference between accidental and actual difficulties, although acknowledging the algorithm is a little depressing. First recall:

"There are two ways of constructing a software design. One way is to make it so simple that there are obviously no deficiencies. And the other way is to make it so complicated that there are no obvious deficiencies." (C.A.R. Hoare)

If you use the first approach, all of your problems are actual. When you use the second approach, all your problems are accidental.

Programming languages are not on some mythical scale, with bad ones at the bottom and good ones at the top. Some programming languages are better at some things than others, period. You use the right language for the right job.

Paul Graham has never heard of such a concept. He thinks there is a right way (his way, his language, his concept) and everything else is wrong.

If you use the first approach all of your problems are actual. When you use the second approach all your problems are accidental.

Aha, so if I use the second approach then all the actual problems go away? Sure, I get a bunch of accidental ones but I should be able to handle that. From now on all my programs will be complex so that I can avoid actual problems!

Reginald Braithwaite: Yeah, but Arc is like old Lisps (from what I can tell it's half-CL, half-Scheme), not anything different like RLisp or Goo (both try to merge Lisp and Smalltalk/Ruby-style object-oriented programming). I even believe that Arc is likely to be a pretty good Lisp when it comes out, but it will be nothing more than Lisp.

Even the most Blubby programmer of Blub has a few ideas about how Blub could be improved, but the changes tend to be pretty small.

Programming languages are not on some mythical scale, with bad ones at the bottom and good ones at the top. Some programming languages are better at some things than others, period. You use the right language for the right job.

This seems to argue that the continuum is local, not global.

Let's say that is true. Within a particular problem domain, some languages are better than others, are they not?

And is it not possible that a programmer in that problem domain is using a language in the middle of the continuum for that domain and does not accept that the better languages for that problem domain are actually better?

So: I wonder if your argument is orthogonal to the existence of Blub programmers?

LISP is at the top of the scale. The reason is simple: the features of new "innovative" languages can all be implemented in LISP without changing the language.

I'm not disagreeing with you, but do you know that Java programmers say much the same thing about Java?

For example, before the latest discussion of adding syntactic sugar for closures, Java programmers felt that you could represent functions with singletons and/or anonymous inner classes.
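To illustrate that workaround, here is a sketch (in Python rather than Java, with illustrative names of my own) of what "representing a function with an object" looks like next to a real closure:

```python
# Simulating a function with a one-method object, in the spirit of
# Java's anonymous inner classes (the names here are illustrative).
class Adder:
    def __init__(self, n):
        self.n = n

    def apply(self, x):
        return self.n + x

add5_object = Adder(5)

# The same idea as an actual closure: no class, no ceremony.
def make_adder(n):
    return lambda x: n + x

add5_closure = make_adder(5)

print(add5_object.apply(2))  # 7
print(add5_closure(2))       # 7
```

Both compute the same thing, which is exactly why a Blub programmer can claim the feature is unnecessary: the simulation works, so the ceremony it requires stops registering as a cost.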

I guess it boils down to this: you will explain that the Java people are wrong, because Lisp contains X and Java does not contain X, and simulating X with Java idioms is not the same thing as implementing X properly.

And I will ask you: why do you think there isn't another language out there that has a feature Y, and although you can implement Y in Lisp using Lisp idioms, the Y programmers will claim that implementing Y with Lisp idioms is not the same thing as implementing Y "properly"?

I would argue that Paul Graham's characterization, as funny as it is, is meaningless and even worse: it indicates a bad separation of concerns. That a language is particularly suited for some kind of self-referential gimmicks does not qualify it as a good application programming language, and won't make it superior to a pairing of a language with a closely related meta-programming system, such as OCaml and Camlp4. In the end it's easy to put (OCaml, Camlp4) or (Python, EasyExtend) at the top of everything (however different they are) and forget about Lisp. Saying this, I recommend doing blubb, blubb with Blub and its so-called paradox.

it's easy to put (OCaml, Camlp4) or (Python, EasyExtend) at the top of everything (however different they are) and forget about Lisp. Saying this, I recommend doing blubb, blubb with Blub and its so-called paradox.

You show some knowledge of meta-programming and its applicability to application development. Now honestly: haven't you met people who say that these features are irrelevant, that their "popular" language is "Good Enough"?

One of the reasons I believe in the paradox is that I've lived through it going through several incarnations. I remember when people dismissed OOP as needless junk, unimportant for serious programming and too slow.

I remember when people dismissed GC memory as too slow, and a crutch for incompetent programmers rather than a means of freeing up valuable programming time for high-priority thinking.

Today people tell me that dynamic meta-programming is not necessary for "Enterprise" work. I won't take a stand and claim that Ruby, Lisp, or the languages you mention are the wave of the future, but it's pretty obvious to me that the people dismissing meta-programming are Blubbering :-)

You show some knowledge of meta-programming and its applicability to application development. Now honestly: haven't you met people who say that these features are irrelevant, that their "popular" language is "Good Enough"?

Of course there are programming language conservatives, just as there are conservatives in any other area of life. But even those people usually don't dismiss all progress; they simply find language customizations about as useful as adapting English grammar for a conversation with different people on the street. So it is basically about designing programs and sharing knowledge across teams and a programmers' community. Java people don't strive for "growing the language" but the library, and they argue for consistency of its design, although I suspect they simply refer to their own established conventions and language-games. As we know, these also include IDEs and XML, which are external to the language but are considered part of the language's eco-system. Maybe we should reverse the perspective and call the Grahamian macho programmer restricted and narrow-minded (as has been done in general culture with the typical male chauvinist super-hero)? He is ignorant of programming language eco-systems simply because he advocates a language that pretends to "have it all", as if you could enter the design space of Haskell by adding some lazy evaluation operator to Ruby or Lisp ...

And I will ask you: why do you think there isn't another language out there that has a feature Y, and although you can implement Y in Lisp using Lisp idioms, the Y programmers will claim that implementing Y with Lisp idioms is not the same thing as implementing Y "properly"?

Because Lisp doesn't force any idioms onto you. Lisp, in its most general sense, is a uniform notation for programs. That's it. Understood as a language family rather than a specific language (say, CLOS), Lisp doesn't imply any idioms at all. I can't think of any idiom that can't be expressed in Lisp.

You may now be going to argue that whenever you design new idioms in Lisp notation, you're actually creating a new language. True, but the one thing that distinguishes creating languages within Lisp from creating them outside of it is that languages embedded into Lisp can be integrated with it, and with each other, extraordinarily easily. Plus you get the uniform syntax for free, which makes reading the new language very easy for Lispers.

Extending a Lisp program and designing Lisp-based languages is the same thing. That's what makes Lisp special.
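That claim about uniform notation can be sketched outside Lisp as well. The following toy interpreter (in Python; it is my illustration, not the commenter's code) represents programs as nested tuples, so extending the language and extending the program are literally the same act:

```python
# A toy uniform-notation interpreter: programs are nested tuples,
# so "growing the language" is just adding an entry to a table.
OPS = {
    "+": lambda a, b: a + b,
    "*": lambda a, b: a * b,
}

def evaluate(expr, env):
    if isinstance(expr, (int, float)):
        return expr          # literal
    if isinstance(expr, str):
        return env[expr]     # variable reference
    op, *args = expr         # operator application
    return OPS[op](*[evaluate(a, env) for a in args])

OPS["max"] = max  # extending the "language" is one line

print(evaluate(("max", ("+", "x", 1), 10), {"x": 5}))  # 10
```

Real Lisps go much further (macros run at compile time and can rewrite syntax, not just add operators), but the structural point survives the translation: when code is data, the boundary between "program" and "language" is a convention, not a wall.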

Oh, and yes, you can implement a Smalltalkesque, message-passing object system for Common Lisp. But why would anyone want to do that? CLOS is a proper superset of that kind of system.

First, thank you for pointing out Lisp's greatest strength, namely that it is a generalized programming language (Scheme especially and almost absurdly so) and it builds industrial-strength "domain-specific languages" right out of the box.

I will point out two caveats:

First, while you can represent any program and any idiom in Lisp, you do so using Lisp's syntax-free s-exprs.

This is its greatest strength and its greatest weakness. There are some benefits to syntactic sugar.

For example, Python has a significant-whitespace rule. This is diametrically opposed to Lisp's philosophy.

I'm not saying Python is better than Lisp, but clearly some people value the syntactic magic of significant whitespace.

A true Lisper says "it matters not, once a program is parsed it is an abstract syntax tree, and a Lisp AST looks like a Python AST." But a Python lover says that the readability of the program before parsing does matter.

I'm not saying that Python is better, or that the significant whitespace (which irritates many programmers) is a good idea.

But I am asking whether all idioms really can be reduced to ASTs and implemented using Lisp's Bottom Up Programming without losing their character?
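The Lisper's half of that argument can be demonstrated with Python's own standard `ast` module: once the source is parsed, the significant whitespace is gone, and what remains is a uniform nested tree (the example below is my sketch, not part of the original discussion).

```python
import ast

source = """
def greet(name):
    return "hello, " + name
"""

tree = ast.parse(source)

# After parsing, indentation has disappeared; what is left is a
# uniform nested structure, not unlike an s-expression.
print(ast.dump(tree.body[0]))
```

Of course, this demonstrates only the Lisper's point that the AST is syntax-neutral; the Pythonista's counterpoint, that readability before parsing matters, is about humans, and no AST dump can settle it.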

It's easy to find a Java programmer who believes that all of the Design Patterns in the GoF's book are necessary. She believes that the difficulties of applying those patterns are actual difficulties of programming systems. It is only when she learns a different language that she realizes how the patterns were strongly driven by limitations in Java's object model.

GoF predates Java by a couple of years and uses C++ and Smalltalk in its code examples. The patterns are a good fit for Java's limitations because those limitations are so similar to C++'s.

I think the problem with asking these questions is that whether language A is more powerful than language B is not clearly defined; in theory we can move from language A to language B by implementing a compiler for B in A, so languages A and B are equivalent in power (so long as they are "Turing complete"). But in practice something is wrong with this statement, so one could reasonably propose that there is no good existing model for the power of languages in the sense that you are trying to ask about. Thus I think the only real answer to your question would be to develop a model so that we can meaningfully talk about the power of programming languages.

A simple model could use language subsets: we could say A is a subset of B iff there exists a mapping from every program a in A to an equivalent program b in B such that the mapping, encoded as a set of tuples (a, b), is a CFL. But I think this model is flawed, because I'm guessing it will show that all conventional programming languages which are "Turing-complete" at compile time are subsets of each other, and that all other conventional programming languages are subsets of the "Turing-complete" ones.

So I'm guessing that a better model is needed, and I'm not sure exactly what the better model would be. Is it possible to capture the practical aspects of whether a language A typically allows you to get away with writing one to three orders of magnitude less code than language B? I think this theoretical exercise may be solvable, and it's definitely worth trying (is there any literature on this?), as it would allow us to get a much better handle on what a "powerful language" is.

Oops, I guess I can't edit posts. Ignore that talk about mappings and CFLs as it is wrong (one can just implement a compiler in one language and call the compiler in the other language to show that any Turing-complete language is a "subset" of any other language, and once the compiler is written, the translation process between language A and B requires nothing more powerful than a DFA). Don't know what I was thinking. But it still seems like we should be able to define what we mean when we say "a language is powerful" or else point out some theoretical reason why we can't define this.

I know I'm very late to this discussion, but one thing I think a number of people miss is that blub languages are optimal given the constraint of a person's or group's knowledge.

If I wish to do an achievable programming task X, then there's a best possible language for X among the languages I know (in other words, given what I know, I'll be able to pick out a language that fits what I know and satisfies my project objectives as best I can).

Then I can extend this list of programming languages to those I've heard of. That is, with some effort, I may be able to find a better language (as far as I can tell) for the task X. As I see it, the blub condition occurs when I see no language better than the one I already know.

As I see it, in the absence of costly experimentation, the blub language is optimal. I cannot come up with a better language unless I look around and try out some other languages. And there's no guarantee that I'll find something better for my purposes that will justify the effort I put into the search.

There's no doubt: Clipper, VB6, THEOS Basic and PHP are paradigmatic Blubs... I recognise them when I see them. You'll scarcely find a single programmer in those languages wondering what the big deal is with Java/C#/Ruby... MOST of them Blub-think.