schoppenhauer wrote:Common Lisp is an old language which has a lot of code and research behind it. Why just throw it away?

OK, this would also just produce a "new Common Lisp" which may have the same problems in 20 years, but then one could just do the same again.

I don't think any Lisp has ever just thrown anything away. Sheesh, every Lisp I know of still has CAR, CDR, and CONS.

Seriously, every Lisp ever designed learns from the experiences of its predecessors (except the first, obviously). IMO, Scheme is a good example of a "reformed" Lisp, which tried to be very true to the spirit of what had gone on before, but also left a lot of the cruft behind. Clojure does something similar, I think, but is more radical in some respects. The problem is that Scheme is minimalist and therefore it's hard to write "real" programs in standard RnRS Scheme. You can certainly do it in a specific implementation such as MzScheme or Scheme48, but then you're bound to that implementation. CL has some of the same problems when you get to the fringes (networking, threading, etc.), but the fringes with Scheme start pretty quickly.

IMO, a group of interested Lisp folks need to come together and create the spec and reference implementation for a new version of Lisp that leverages the experiences of the past, but cleans things up. I always have this vision of a clean spec, like Scheme, that is also expansive enough to write real programs, like CL but even more so by creating standard behavior around modern requirements like threading and networking, for instance. Clojure is a good example of that, but I'm always turned off by the use of the JVM. Maybe that's just me and I need to spend more time with it. I'm not negative on Clojure generally, and I respect many of the things that Rich has done with it. The pure-functional data structures are very cool (though I haven't seen any good benchmarks about performance), and I can even appreciate the use of the JVM in those environments where your only choice is to run on the JVM (Google AppEngine just announced support for the JVM and my first thought was, "Oh, you could probably run Clojure on that. That's cool!").

The big problem is that nobody with enough "weight" in the Lisp community has the stomach for standards work any more. Everybody reacts like a dog that has been kicked daily for 3 years whenever anybody mentions Lisp standards. There is a big "don't go there" reaction. As a result, CL just stagnates, which is I think what Dan is reacting to.

Lisp seems to be getting more interest in recent years, coming out of the AI Winter. That's good. The problem is that while everybody is interested, when they go to look at the reality of things, they find that Lisp has been static since the early 1990s and doesn't match up well versus the full-featured programming languages being introduced today (things like Perl, Python, Ruby, etc.). It isn't that Lisp can't do the same things, it's that the implementations and libraries are very spotty and it's hard to assemble the environment that you want. Perl is full featured as a base language and it has CPAN if you want more. Python is too, and it has eggs. Ruby, likewise, and it has gems. CL doesn't even have a standard networking layer, and ASDF-INSTALL is brittle and while there are other options (Mudballs, etc.), none yet have critical mass.

I won't even insult you with the code I was using prior to this. I realize this may not be the "fastest" implementation, but it is getting the job done at the moment and I can always optimize later if necessary.

That's an interesting usage of displaced arrays, but it seems pretty costly in terms of garbage generated. Typically, when I call a predicate, I don't expect it to cons. Obviously, if you only call it rarely, that won't be a problem, but just recognize that you're making the GC work overtime.
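To make the concern concrete, here is a hypothetical sketch (the original code isn't shown in the thread) of a predicate that conses a fresh displaced array on every call, alongside a non-consing alternative that uses STRING='s bounding keywords:

```lisp
;; Hypothetical example: tests whether STRING begins with PREFIX by
;; consing a new displaced-array header on every call.
(defun starts-with-p (prefix string)
  (and (<= (length prefix) (length string))
       (string= prefix
                (make-array (length prefix)
                            :element-type (array-element-type string)
                            :displaced-to string))))

;; Non-consing alternative: bound the comparison with :END2 instead
;; of materializing a view of the target array.
(defun starts-with-fast-p (prefix string)
  (and (<= (length prefix) (length string))
       (string= prefix string :end2 (length prefix))))
```

The displaced array has no storage of its own, but the array header itself is still a fresh allocation each time the predicate runs.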

Which kind of sums up why I'd just as soon embrace the "Fundamental problems with the Common Lisp language": they're not inhibiting me from solving my problems. There are many parts of the language that I don't use, but as I've used it more, I've started to realize why those concepts are there. The current example is LOOP. I've avoided LOOP and found ways to use DOLIST, DOTIMES, DO and DO*. Recently, I kept finding myself using DO and friends in an unaesthetic manner, and the rationale for LOOP clicked. Now I need to go learn LOOP. I'm realizing that the number of concepts in CL is beneficial.

Yea, I don't see anybody arguing for drastically fewer concepts. Most everything in CL has stood the test of time, and I find myself often saying, "Oh, that's useful," too, but in some cases we have multiple implementations of essentially the same thing (plists vs. alists, for instance). BTW, I would not classify LOOP vs. DOLIST or DOTIMES as examples of this issue. If you're just iterating over a list or for a specific number of times, DOLIST and DOTIMES are more straightforward, IMO. But their simplicity also means they cannot do things that LOOP or DO can. For what it's worth, I prefer LOOP over DO, but I'm starting to get tempted by ITERATE.
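As a toy illustration of that point (my example, not from the thread): simple traversals read fine with DOLIST, while LOOP earns its keep once you iterate over several things at once:

```lisp
;; Simple traversal: DOLIST is perfectly clear.
(defun sum-list (xs)
  (let ((sum 0))
    (dolist (x xs sum)
      (incf sum x))))

;; Parallel iteration with collection: this is where LOOP pays off.
(defun pair-up (keys values)
  (loop for k in keys
        for v in values
        collect (cons k v)))
```

Writing PAIR-UP with DO is possible but requires juggling two explicit stepping forms and an accumulator, which is exactly the "unaesthetic" pattern described above.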

ECL is a great implementation with a small footprint, but ECL can't solve fundamental CL issues. Weinreb is talking about things that are intrinsic to the language, not the implementation. The only way to solve those problems is to change the language itself.

"Unimportant features that are hard to implement" seems like a combination of "Too many concepts" and "Hard to compile efficiently", so I don't quite see how that is a separate problem from the previous problems.

Realize that sometimes feature complexity can domino into other features, too. You end up with situations where you say, "I can't do optimization X because somebody might use feature Y, and they aren't compatible." Weinreb is suggesting that since feature Y isn't used very often, it would make more sense to remove it and thereby make it easier for compiler writers to use optimization X. What are X and Y? I have no specific clue, but this comes up all the time in other languages, too, so I assume there are many (e.g. pointer aliasing is a big problem in optimizing C). Displaced arrays were mentioned, and I think that they probably do force a couple more indirections when trying to access the data.

"Not portable" is a problem that isn't a problem. It is a problem if portability between implementations is your goal. It isn't a problem if portability between systems is your goal. But portability between systems might be a problem if you choose an implementation that isn't portable. Luckily, there are several implementations available so you can choose one that fits your problem.

Yes, but... if too many things aren't portable, there is little reason to have a specification and standard. Simply allow everybody to implement their own version of "Lisp" and be done with it.

Also, a lack of portability is a stunting factor in the development of libraries. If everybody writes implementation-specific code, you end up with balkanized libraries.

"Archaic naming" is a non-issue because CL has "Too many concepts", so you don't have to use the "Archaic naming". I'd like to take this opportunity to suggest starting an annual Common Lisp programming challenge along the lines of the "Obfuscated Code Contest" called the "Archaic Code Contest".

Yea, I don't have too much problem with funky naming. Every language has some funky naming and you just have to get used to it. CAR vs. FIRST isn't that much of a problem for me. The bigger problem is regular naming. For instance, I love Scheme's use of "!" to signify mutation. I'd rather have APPEND and then APPEND! rather than NCONC.
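For anyone unfamiliar with the pair being discussed, the behavioral difference that the naming hides looks like this:

```lisp
;; APPEND returns a fresh list and leaves its arguments alone:
(let ((a (list 1 2))
      (b (list 3 4)))
  (append a b)   ; => (1 2 3 4)
  a)             ; => (1 2), unchanged

;; NCONC splices destructively, but nothing in the name says so;
;; a name like APPEND! would advertise the mutation the way
;; Scheme's ! convention does.
(let ((a (list 1 2))
      (b (list 3 4)))
  (nconc a b)    ; => (1 2 3 4)
  a)             ; => (1 2 3 4), A's last cdr now points into B
```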

"Not uniformly object-oriented" has actually been a problem for me. But, compared to the conceptual limitations that I've hit my head against in other languages, I can live with it.

Yes, but that's the sort of thinking that keeps anything from improving. In other words, I don't think anybody is arguing that CL is unusable. It isn't. People use it all the time. It's a great language, with a lot of great concepts. The only thing that Weinreb is saying is that it's a great example of a language that has evolved over time and now needs some cleanup. In the same way that we "refactor" code over time, the same concept could be applied to languages.

Sometimes I think the "Fundamental problem with Common Lisp" is that it is just good enough to enable us to envision the "perfect" programming environment, but not bad enough to motivate us to define and implement it.

Tom wrote:Recently, I kept finding myself using DO and friends in an unaesthetic manner, and the rationale for LOOP clicked. Now I need to go learn LOOP. I'm realizing that the number of concepts in CL is beneficial.

gugamilare wrote:It is most likely that at least 70% of complaints about CL are just newbies having difficulty adapting to the language. Sometimes (not all the time) they are just finding excuses not to keep learning CL.

Yeah, some reactions are pretty awful, like "if i read (+ 1 (* 3 4) 5) i have to start reading in the middle and not with 1 + 3 *4 +6". That sort of statement makes me think they do not even know what they're reading.

findinglisp wrote:but in some cases we have multiple implementations of essentially the same thing (plists vs. alists, for instance).

Sometimes, though, you do need two implementations of essentially the same thing. Hash tables, plists and alists all basically do the same thing, except that plists and alists are faster for smaller numbers of items. (But making them lists too doesn't make sense to me, conceptually.) The point where one of these implementations is chosen is when you decide on the structure, and hence the functions GETHASH, ASSOC and GETF should be one overloaded function. Another case is sorting methods: sometimes the behavior is governed by the structure, as with red-black trees, and sometimes they are just lists with different methods of sorting.

But I guess we do not always see the problem clearly enough to have one interface to multiple different implementations for different areas. Or it would take too long to see it. (Or maybe it is not even possible to make such an interface.) However, I guess a language/standard library should give such interfaces when we are able to do so without penalty.

An example where I do not think we could make a good single interface is 'sorting' rectangles (or should we be more general? shapes?); one could do this with quadtrees, for instance. When I wanted to do this for a simulation I looked at quadtrees, but I didn't really get them. (Edit: was I stupid; now I see 'delete', 'insert', 'search', not that hard.) Quadtrees can be useful for drawing heightmaps too, which makes easy interfacing hard. (Btw, 'quad-arrays' are inferior to quadtrees; you can make multiple levels of quadtrees in one structure, within which you can find a 'correct' element in the quadtree in one step.)

Thanks for your replies (not to imply you shouldn't keep them coming); it's nice to know people here are open to, but not uncritical of, improving Lisp. On the other hand, the environment is very important too: packaging easily installable libraries, utilities and documentation certainly sounds like a good idea. I suspect that this causes more loss of people to other languages than problems with the language itself.

Tom wrote:Recently, I kept finding myself using DO and friends in an unaesthetic manner, and the rationale for LOOP clicked. Now I need to go learn LOOP. I'm realizing that the number of concepts in CL is beneficial.

I think ITERATE is a good point in Common Lisp's favor: the default answer to major changes to Common Lisp is mostly "why don't you write a library for that", and the fact that ITERATE exists, and can be used instead of LOOP if you wish, shows how flexible Common Lisp is.

Jasper wrote:The point where which of these implementations is chosen is when you decide the structure, hence the functions, gethash, assoc and getf should be one overloaded function.

I think it shouldn't be too hard to implement some kind of associative-array class which overloads this function. Or at least define functions which do the type checking themselves (and use compiler macros for optimizing). That's the same point: just write a library "trivial-associative-arrays" that defines a "get" function, use it, and make others use it. If enough people use it, it will be portable, since every implementation will want to support it (even though it's not pure ANSI).
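A minimal sketch of what such a library's core could look like (the generic-function name LOOKUP and the alist-vs-plist heuristic are my inventions, not an existing API):

```lisp
;; Hypothetical "trivial-associative-arrays" core: one generic
;; lookup function dispatching on the container's type.
(defgeneric lookup (key container)
  (:documentation "Return the value associated with KEY, or NIL."))

(defmethod lookup (key (container hash-table))
  (gethash key container))

(defmethod lookup (key (container list))
  ;; Heuristic: a list whose first element is a cons is treated as
  ;; an alist; anything else is treated as a plist.
  (if (and container (consp (first container)))
      (cdr (assoc key container))
      (getf container key)))
```

Compiler macros could then rewrite LOOKUP calls whose container type is statically known into direct GETHASH/ASSOC/GETF calls, as suggested above.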

And that's why I said: don't throw Common Lisp away. Keep Common Lisp, but say what's deprecated, tell people what's "not good coding style", and give "good" alternatives (like the ITERATE library, which could be recommended).

findinglisp wrote:That's an interesting usage of displaced arrays, but it seems pretty costly in terms of garbage generated. Typically, when I call a predicate, I don't expect it to cons. Obviously, if you only call it rarely, that won't be a problem, but just recognize that you're making the GC work overtime.

What's memory?

When I initially implemented that, I wondered if it was excessively consing, but the HyperSpec gave me the impression it might not be too bad.

HyperSpec wrote:displaced array n. an array which has no storage of its own, but which is instead indirected to the storage of another array, called its target, at a specified offset, in such a way that any attempt to access the displaced array implicitly references the target array.

I just profiled it and it is indeed consing a lot. Thanks for the heads up. It's still not a problem for my application, but now I'm a little worried. My MO at the moment, though, is to make it concise and correct. Then, when the performance is unacceptable, profile and optimize.
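For anyone who wants to check this kind of thing themselves, TIME is the quickest tool; most implementations (SBCL, for example) include consing in its report. A sketch, with a made-up function standing in for the real predicate:

```lisp
;; Hypothetical check: run the suspect predicate many times under
;; TIME and look at the allocation figure in the report.
(defun suspect-predicate (string)
  ;; Stand-in for a predicate that allocates on every call
  ;; (SUBSEQ conses a fresh string each time).
  (plusp (length (subseq string 0 1))))

(time (dotimes (i 100000)
        (suspect-predicate "hello")))
;; In SBCL the report includes a "bytes consed" line; a large number
;; here means the predicate allocates on each call.
```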

ECL is a great implementation with a small footprint, but ECL can't solve fundamental CL issues. Weinreb is talking about things that are intrinsic to the language, not the implementation. The only way to solve those problems is to change the language itself.

So, maybe what we're looking for here is not an updated Common Lisp, but a different lisp. That's basically what the link to L above describes, although it is a subset of Common Lisp. Upon further review of the ILC 2009 forum article, it appears that Weinreb was not referring to small memory so much as fast startup.

Weinreb wrote:OK, I should not have put it in terms of memory. The real point is that it should be possible to write programs that start up fast. In fact, I've been having a whole conversation on another blog about this. See ...

This is getting a little confusing.

findinglisp wrote:

"Archaic naming" is a non-issue because CL has "Too many concepts", so you don't have to use the "Archaic naming". I'd like to take this opportunity to suggest starting an annual Common Lisp programming challenge along the lines of the "Obfuscated Code Contest" called the "Archaic Code Contest".

Yea, I don't have too much problem with funky naming. Every language has some funky naming and you just have to get used to it. CAR vs. FIRST isn't that much of a problem for me. The bigger problem is regular naming. For instance, I love Scheme's use of "!" to signify mutation. I'd rather have APPEND and then APPEND! rather than NCONC.

Well, that falls under the first problem he stated.

findinglisp wrote:

"Not uniformly object-oriented" has actually been a problem for me. But, compared to the conceptual limitations that I've hit my head against in other languages, I can live with it.

Yes, but that's the sort of thinking that keeps anything from improving. In other words, I don't think anybody is arguing that CL is unusable. It isn't. People use it all the time. It's a great language, with a lot of great concepts. The only thing that Weinreb is saying is that it's a great example of a language that has evolved over time and now needs some cleanup. In the same way that we "refactor" code over time, the same concept could be applied to languages.

I waffle on this subject. Having Common Lisp defined and static is a benefit from the perspective of not having to worry about losing the investment made in writing a library and learning the idiosyncrasies of the language. The enemy you know is always better than the enemy you don't.

On the other hand, stasis is death, and there are obviously many lessons learned and changes in the computing environment since 1994 that would benefit an updated Lisp. On reflection, what I think I would want to see is not a new Common Lisp, but a new Lisp based on Common Lisp. Two things are required for me to buy into it:

(1) A detailed design and analysis document. The document should itemize the motivations for designing a new Lisp. It should describe each problem, show specific code demonstrating the problem, give a literature review of the history of the problem and known solutions, and present the proposed solution.

The document should then describe what the language will provide. Are we talking batteries included or a tight core language? Personally, I like the idea of a tight core language with everything else as a library. It's worked well for C. But, then again, you can't argue with those that advocate the Java/Python batteries included approach. No, really, I'm serious, you can't argue with them.

Finally, the document should define the language like the HyperSpec does, but without allowing implementation-defined behavior. I realize this contradicts my previous argument, but that argument was made in the context of Common Lisp, where I don't really think it is a problem. In the new language, we might as well get rid of implementation differences to negate this argument from the outset.

(2) A reference implementation. A design document is one thing, but there are complications with designs that simply don't manifest until you actually start to implement the design. So, there should be a reference implementation that evolves with the design document. Only when the implementation is fully synchronized with the design document should the new Lisp be frozen.

I couldn't care less if it is an ANSI standard. In fact, it would probably be better if it wasn't. I think a moderated wiki would work better. And without a standard, when the next round of miscreants surfaces griping about this new Lisp, they can fork it and run off into their corner and hack away to their heart's desire. I would argue this hasn't hurt the BSDs, and in fact is beneficial when you consider the cross-pollination that occurs.

The reason I set up a separate site was that I had the idea that the conference could have its own site that would act as an extended part of the conference itself. I was hoping that a bunch of active threads would be set up before the conference happened. That way, we'd have some well-developed discussions in progress that we could continue in person. For the most part, the experiment didn't work. Maybe it's just a bad idea, or maybe it could have been done more effectively.

It seems to me that to consider the next 50 years of Lisp (or even the next five years), it helps to start off by being more concrete about what we feel the problems are with where we are now. I tried to gather together all the valid complaints that I could find about Common Lisp that many people were voicing, or that were fairly fundamental.

@schoppenhauer: There are no proposed solutions because that particular thread was just about gathering up the problems in Common Lisp. There are several things one can do, and they're not mutually exclusive: find ways to improve things for Common Lisp users, and/or work on new dialects that aren't Common Lisp. So I wanted separate threads for those topics, just to keep everything straight. What you're suggesting is one approach, but once you've defined this Common-Lisp-like new dialect, then what? Try to get a lot of the existing implementers of Common Lisp (there are 11 of them) to sign up to produce it? It might be hard to get agreement. However, don't let me stop you!

Tom wrote:The document should then describe what the language will provide. Are we talking batteries included or a tight core language? Personally, I like the idea of a tight core language with everything else as a library.

I thought 'batteries included' referred to a good number of (standard) libraries. Are the batteries in Java/Python really included in the language (gasp), or are they libraries? For me, a tight core language with everything else as a (standard) library is just about a requirement. Maybe even to the level that the language is not really usable without the standard libraries it comes with. It should definitely come with 'batteries included' in the sense of easily installed/pre-installed recommended libraries.

I agree with your requirements for buying into a new Lisp, but the road to the new Lisp is different; it doesn't have to satisfy the requirements immediately to have potential. Also, some issues, like namespaces/package systems, are orthogonal to the others.

Tom wrote:The document should itemize the motivations for designing a new lisp. It should describe each problem, show specific code demonstrating the problem, a literature review of the history of the problem, solutions to the problem and the proposed solution.

I am not so sure that everything should be seen as 'this is a problem with CL, and this solves it'. For instance, say the new language is more general at something than CL, and shows that this isn't a problem for speed or anything else. Isn't it CL that should have to prove that this generality is somehow not useful, rather than the new language?

Tom wrote:Finally, the document should define the language like the HyperSpec does, but without allowing implementation defined behavior. I realize this contradicts my previous argument, but that was cast in light of I don't really think it is a problem for Common Lisp.

In some places I agree, but in others, having implementation-defined behavior can make the implementor's job easier, or make it easier to make things fast. The only place (I think) this happens is numbers, though. I see no reason why every number type should behave exactly the same across CPUs and implementations.

dlweinreb wrote:The reason I set up a separate site was that I had the idea that the conference could have its own site that would act as an extended part of the conference itself. I was hoping that a bunch of active threads would be set up before the conference happened. That way, we'd have some well-developed discussions in progress that we could continue in person. For the most part, the experiment didn't work. Maybe it's just a bad idea, or maybe it could have been done more effectively.

I guess people didn't feel like registering, making a password and all that. I guess there have been many discussions outside your website. (Actually, sometimes when I want to look up a discussion, I type "reddit " in front of the URL to see if there is a discussion on reddit.)

Why should the 11 Common Lisp implementors sign up to produce it? It would be great if they did, but I don't see any reason for that to be a requirement.