
Yearning for a practical scheme

I have spent the last year learning Common Lisp and then Scheme. Especially after watching most of the SICP videos, I am completely enamored with Scheme. It was a godsend for my natural language course last semester, in which I needed my only constraint to be my imagination rather than my programming language. I am addicted. But I am also in crisis, because Scheme feels like both a blessing and a curse. It's a blessing for the obvious reasons, but it's a curse because it doesn't feel like Scheme is everything I want it to be, and I don't exactly know why. It feels impractical. I have some questions about, and comments on, my experience.

First, my Scheme code feels sloppy, much more so than the mind-numbing Java I write for my research job. Is it just all of Java's boilerplateism that gives it the illusion of better organization? Sure, there's more code, but I never have to think twice about where to put that code. Maybe it's Java's rigid design that forces me to be more organized when I use it. In imperative languages, the creation of new procedures often feels very arbitrary. It took experience for me to learn when to excise a handful of imperative operations and move them to their own procedure. In Scheme this feels more natural... every procedure seems to map well to the conceptual operation it performs. I think this has a lot to do with functional programming's emphasis on the "computation as calculation" perspective. Because imperative languages highlight the computer's role as a storage device for program state, perhaps they don't lend themselves well to the conceptual breakdown of code into procedures. But is there something about the view of computation as "manipulation of storage" that lends itself to a better conceptual breakdown when it comes to collections of related procedures? In my Java programs I always feel like there's a place for everything and everything is in its place. My Scheme programs (though I haven't written THAT many full-blown programs) always end up being huge listings of procedures grouped into a single massive file... This probably has as much to do with inexperience as it does with language properties, but it also seems true that in Java I have to think about this sort of thing less. What are good organization techniques? Does anyone have advice or guidelines on breaking Scheme programs into compilation units?

Coming from the static world of C++ and Java, one of the most seductive parts of Lisp early on was its complete type dynamism. Having the straitjacket removed felt wonderful, but as I gain more experience, I'm starting to miss the straitjacket. I know that Haskell and OCaml are statically typed, and I've ventured a little into Haskell, but I still really like Lisp. It feels more "low-level". I like being able to use assignment if I feel like it... it seems to me that it's convenient for the human mind to model "computational objects" as little pools of shared mutable state. It's a shorthand we also use in dealing with the real world. Even though everything is interconnected, at least at the atomic level, meaning there is no such thing as truly isolated state, we nevertheless think of the world in terms of objects. Without assignment, it seems harder to section off abstractions from each other. I guess the Haskell crowd would achieve this with monads. I'd be interested in hearing people's thoughts. Maybe this will change as I learn more about Haskell, but I still don't like it as much as I like Scheme... I don't completely know why, but I think Scheme's lack of syntax has a lot to do with it. Scheme programs look pretty and nested on the page, while Haskell looks more like a math textbook... cool in its own right, but I prefer the former as of yet. So here is my question: would it be possible to give Scheme static typing while maintaining its appeal? Has this already been done? If not, why? I haven't thought about it at length, and maybe Scheme's elegant syntax would be crushed by type annotations, but I still wonder. Why not?

Lastly, what do people think about implementations? I keep shopping around for one that I like, and I keep coming up empty. PLT falls short because it depends too much on DrScheme, and I want to use Emacs. I don't think I should have to abandon Emacs just to gain access to the stepper and interactive debugging. MzScheme's text mode debugging support seems paltry. I understand the appeal of Scheme's minimal definition, but when I'm using it for practical purposes, I don't want the API to feel tacked on. I don't want to import an object system. I want it to be there and feel like it's built-in. I don't want to load a SRFI number, I want the Scheme equivalent of saying import java.util.foo with a name that maps to functionality. I guess what I want is a Scheme that reminds me of Common Lisp while not being as ugly as Common Lisp is. I need a full-fledged toolbox, and I need it to feel seamlessly integrated. Does anyone have suggestions? If not, why doesn't such a Scheme exist?

If you read this far, thanks for your advice / comments. I have been reading and learning from this community for a long time and now I am finally a member.

It's not taken facetiously... I am somewhat intrigued by Smalltalk from what little I know about it, so maybe I'll check it out. But I was always under the impression that it was firmly in the imperative paradigm. I feel like Scheme does everything right except for the fact that it seems to encourage fragmentation into lots of tiny pieces rather than a coherent, convenient system. Thanks for the suggestion. Thanks also for the pointer to the SLIME Scheme 48 hack. Cool.

Although the most immediately obvious problem is that the UI is designed to appeal to five-year-olds, the more serious problems (as of some 12 months ago) are that the UI can only be created within the Squeak window, since no one seems to have created bindings for other windowing systems; that the recommended graphical toolkit has no up-to-date documentation; and that the available editor is relatively primitive.

It might be possible to learn to use Squeak in the context of a non-GUI application, such as Seaside, though. I've never used Seaside. As an aside, are there any good examples of apps made in Seaside, with code available? The documentation and tutorials also don't make it clear how much is already done for the developer, for example in creating AJAX components.

Those shortcomings are why I'm building up Slate: UI architecture flexibility, good documentation, friendliness to the command line, evolved libraries, and so on. Basically I'm taking all the good ideas in Smalltalk and Lisp systems and melding them.

For nerds, geeks, and what have you, the source of postmodern hipsterism is five-year-olds, via nostalgia. Who is not enamoured of cute cartoon figures and the like? I think it will be very difficult to find a programmer who will sniff at Oswald the Lucky Rabbit and say "kid's stuff" instead of being captured by the pop-culture buzz.

The problem with the Squeak UI is that they should have tried more consciously to ape a classic cartoon style. Something like the old Fleischer Studios look would make a great desktop theme, I think.

I believe there are some preliminary bindings to Mac OS X's GUI as well.

There has also been significant work on a replacement for Morphic: Tweak (Morphic is the UI "designed to appeal to five-year-olds" that "has no up-to-date documentation"). Tweak also relies on extensions ("method annotations" I think) to the base language.

Squeak is a wonderful tool/environment/Smalltalk implementation. However, its GUI is so orthogonal to every other desktop I've ever used that I've never been able to do anything with it. I'm so used to text editors and code, right-clicking to bring up a context sensitive menu, Windows, KDE, vim, etc. that the whole Squeak desktop is like parachuting into a place where nobody speaks English with only $1.25.

Now I'm sure if I stuck with Squeak, I'd at least be able to function there, and there is always the risk that I'd like it so much I'd be unable to go back to my old ways. :) However, I've recently discovered (perhaps acknowledged is a better word) Ruby, and that's where I'm going.

Ruby is touted as combining the best of Java, Smalltalk, Perl and Lisp, and features closures prominently. It's object-oriented (classes/attributes/methods) rather than functional. And it's a big deal because the "agile" development crowd is moving to Ruby as a replacement for the "traditional" scripting languages Perl, PHP and Python for rapid development of web applications. If you've been under the proverbial rock, do a search for Ruby on Rails. :)

Scheme is one language I truly love, given its minimalist syntax, very small core API, advanced concepts, and flexibility like no other. I also very much like the parenthesized syntax, since it makes it very easy to navigate through the code when using a decent editor like Emacs.

Still, I find it regrettable that its specification only deals with the language itself, refusing to mandate even minimal ways of dealing with the outside world. I find it appalling that in this day and age, the only standard way to have any connection with the outside world, to interoperate with other systems, is via files. I want a high-level socket interface and perhaps an FFI wrapper. Is that really asking too much?

As for organization of code, perhaps you should look into modules? Unfortunately, they're not specified either, though that's about to change. Every major implementation has its own module system.
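For a flavor of what a module buys you, here is a sketch in PLT Scheme's module syntax (the `stack` module and its names are made up for illustration; other implementations spell this differently):

```scheme
;; A small PLT Scheme module: only the names listed in `provide`
;; are visible to code that requires the module.
(module stack mzscheme
  (provide make-stack stack-push stack-pop)

  (define (make-stack) '())

  (define (stack-push s x) (cons x s))

  ;; Returns two values: the top element and the remaining stack.
  (define (stack-pop s) (values (car s) (cdr s))))
```

A client then says `(require stack)` (or `(require "stack.ss")` if the module lives in its own file) and sees only the provided names, which is exactly the kind of boundary that keeps a program from becoming one massive file of procedures.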

as for your comments...

"It feels impractical."

yes. absolutely. IPC now!

"my Scheme code feels sloppy"

you're a novice. you'll get better, eventually.

"My Scheme programs always end up being huge listings of procedures grouped into a single massive file"

modules. that's all. topicalization.

"type annotations, but I still wonder. Why not?"

i don't miss it. if i did, i'd be using haskell or ocaml more often.

"when I'm using it for practical purposes, I don't want the API to feel tacked on. I don't want to import an object system. I want it to be there and feel like it's built-in. ... I want the Scheme equivalent of saying import java.util.foo"

I don't think the object systems feel tacked on; I think they feel the same as any code library. And they show OO is nothing special, just another useful abstraction brought into Scheme thanks to its flexible nature. They don't feel any less built-in than car or cdr, since they are all just procedures.
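As a sketch of that point (names here are made up, not any particular library's API): a serviceable message-passing "object" is nothing but a closure built from lambda, let, and set!.

```scheme
;; A toy "object system" from plain procedures: the closure's private
;; binding `n` is the instance state, and the returned procedure is
;; the method dispatcher.
(define (make-counter)
  (let ((n 0))
    (lambda (msg)
      (case msg
        ((inc!)  (set! n (+ n 1)) n)
        ((value) n)
        (else    (error "counter: unknown message" msg))))))

(define c (make-counter))
(c 'inc!)   ; => 1
(c 'inc!)   ; => 2
(c 'value)  ; => 2
```

Real object systems add inheritance, dispatch tables, and syntax on top, but the kernel is just this: procedures and environments.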

"I don't want to load a SRFI number, I want the Scheme equivalent of saying import java.util.foo with a name that maps to functionality."

Modules give you that. SRFIs aren't tied to a module system because no standard one exists, but that's about to change in R6RS.

"why doesn't such a Scheme exist?"

It has often been noted that the Lisp community is very fragmented. I wish there were some commonly agreed-on interfaces for practical everyday programming needs. They wouldn't even have to be cross-platform libraries of some sort, just something more like, say, Python's PEPs. For instance, both the mysqldb and pygresql modules for database access provide the very same interface described in a PEP, despite their teams probably never having coordinated.

One really serious difference between Java (and C++, conceptually) and Scheme is modularity. With Java, almost everything has exactly one place it can be, once you at least partially grok Java's version of OO design. Scheme, and many other languages, don't enforce those kinds of limits, and if most of one's experience is with Java or C++, that lack of enforcement can feel sloppy. On the other hand, if Scheme did enforce something that didn't match Java, it would feel restrictive.

How much time has been spent on learning good OO Java design? Hopefully, subsequently learning good Scheme design would be easier, but the work can't totally be avoided. Until someone has a feel for what a good design looks like in an environment, it's hard to know how to organize code. And what defines a good design interacts strongly with the module system.

i've had the same experience with both ocaml and scheme. what helped me was deciding that i would program in a functional manner, and focus on the appropriate language constructs. otherwise, i got sidetracked into solving the same problem in different ways.
(credit to paul s for suggesting this, iirc)

So here is my question: would it be possible to give Scheme static typing while maintaining its appeal? Has this already been done? If not, why? I haven't thought about it at length, and maybe Scheme's elegant syntax would be crushed by type annotations, but I still wonder. Why not?

The first and second describe the brains behind MrSpidey, DrScheme's first static type-checking system; the third is the brains behind MrFlow, the newer one. Unfortunately, at the moment MrSpidey has bitrotted away and MrFlow isn't quite ready for prime time (not because it doesn't work well, but because it doesn't handle full PLT Scheme, only R5RS plus some PLT extensions, if I remember correctly). Both of these systems are so-called "soft" type systems, meaning you can run them over your Scheme program and they'll tell you about any type errors they find, but you're allowed to run the program anyway if you're sure those type errors won't actually lead to the program crashing.

Another approach is to use "contracts" rather than types, meaning annotations on functions that specify type-like things but are actually checked dynamically and can assign blame for failure on a particular component of the system. I think this is considered more Schemely by a lot of people, and in any event a full contract system is implemented in PLT Scheme and is used fairly heavily in PLT code.
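For a flavor of what such annotations look like, here is a sketch using the contract syntax of present-day Racket, PLT Scheme's descendant (the details were somewhat different at the time this was written):

```scheme
#lang racket

;; The contract says: take a number and a nonzero number, return a number.
;; Violations are caught at runtime, and blame is assigned to whichever
;; side of the boundary broke the agreement.
(define/contract (safe-div x y)
  (-> number? (and/c number? (not/c zero?)) number?)
  (/ x y))

(safe-div 10 2)  ; => 5
;; (safe-div 10 0) would raise a contract violation blaming the caller.
```

Note the type-like shape of the annotation; the difference from static types is that nothing is proved before the program runs.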

As for implementations, I've got to say that PLT is a good choice, and you can trust me to be totally unbiased since I'm an active developer of the PLT system :). I like DrScheme and I think some of its features are worth putting up with its deficiencies with respect to emacs, but if you don't, you can always use Neil Van Dyke's Quack package to give yourself a PLT Scheme editing mode in emacs.

I don't want the API to feel tacked on. I don't want to import an object system. I want it to be there and feel like it's built-in.

PLT's object system feels quite "built-in" to me even though I've got to say (require (lib "class.ss")) to get at it, considering that the entire graphics system is implemented in it. I think this objection would go away very quickly if you started using the system in earnest.

I don't want to load a SRFI number, I want the Scheme equivalent of saying import java.util.foo with a name that maps to functionality.

PLT Scheme actually allows you to refer to SRFIs by names rather than numbers (e.g. (require (lib "list.ss" "srfi")) rather than (require (lib "1.ss" "srfi"))), though I would discourage that practice; the names are made up by us whereas the numbers are standard.

In any event, if you find yourself always loading particular libraries, it's easy enough to build your own language that automatically imports those things and either (1) use it as the base language for modules you write or (2) make it a language level for DrScheme.

Thanks for your advice... I had seen MrSpidey when I was just getting started learning Scheme (with PLT) but I didn't really understand what it did back then. I like the idea of a "soft" type system... does MrFlow accept type annotations like Haskell?

But as far as DrScheme, I can't deal with it. I tried. I don't like it. And then I was frustrated that so many of the things I wanted to do (such as debug) required it. If PLT Scheme had better separation of language features from IDE or turned DrScheme into a super-cool Emacs replacement with Scheme scripting in place of warty Elisp I'd probably go back, but as of now I'm looking at Gauche and Chicken. Not meaning this to be a rant or anything... I'm certainly not programming any Scheme implementations just yet (although after I finish EOPL, maybe?). This is more just feedback.

Not at the moment, but I believe there's active development on making it understand the type implications of user-defined contracts. Under that system, if you gave a function the contract (integer -> integer), MrFlow would understand that and try either to prove away the necessity of inserting any contract checks into the generated code, or to find a circumstance where the function appears to be used in a way that's inconsistent with the contract.

I was frustrated that so many of the things I wanted to do (such as debug) required [DrScheme].

Actually that's not true, though that may not be very well known. mzscheme -M errortrace [other mzscheme arguments afterwards ...] gives you the same error tracing facilities that DrScheme uses. In general you only need DrScheme for things that don't really make sense without a GUI, like the check-syntax binding-occurence/bound-occurence arrows.

First of all, if you have any API dependencies on Java, your best bet is SISC: http://sisc.sf.net

SISC is fully R5RS compliant, supports the full numeric tower, tail-call optimisation, all of that beautiful stuff. It has *extensive* support for bringing Java objects and methods into the Scheme world, and though it's somewhat more cumbersome to use than cute little hacks like JScheme's JavaDot, too much (syntactic) sugar is bad for you.

Secondly, everyone's Scheme code is sloppy at first, and it tends to be sloppy in certain standard ways, unless you're like someone I know whose first exposure to programming was taking the SICP course at MIT. This is because we're all used to the standard imperative model: allocating space for variables, changing their values, etc. Scheme is like a whole different paradigm. While the ability to mutate stuff is convenient in some cases, as someone once told me, a good Schemer should cringe whenever he types an exclamation point. Or, less radically, when programming Scheme you should learn to economise your exclamation points. Thinking functionally helps you understand your programs a whole lot more clearly, including when and how to bend the rules.
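To make the "economise your exclamation points" advice concrete, here is a small sketch contrasting a state-mutating loop with its functional equivalent:

```scheme
;; Imperative habit: accumulate by mutation (note the set!).
(define (sum-list/imperative xs)
  (let ((total 0))
    (for-each (lambda (x) (set! total (+ total x))) xs)
    total))

;; Functional habit: thread the accumulator through a named let,
;; so there is no mutable state at all.
(define (sum-list xs)
  (let loop ((xs xs) (total 0))
    (if (null? xs)
        total
        (loop (cdr xs) (+ total (car xs))))))

(sum-list '(1 2 3 4))  ; => 10
```

Both compute the same thing, but the second makes the data flow explicit, which is what makes functional programs easier to reason about.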

Your initial difficulties with where to put the code were something I experienced also, in a different language: Smalltalk. I had gotten so used to thinking functionally that when it came time to encapsulate things into objects, the lines became fuzzy as to which objects should have which methods. So I just practiced, and things got a little clearer with time.

I'm in basically the same boat. Currently I'm using Chicken Scheme, which I can deal with, but, uhh... Well, it can be frustrating sometimes.

I love Scheme and am using it successfully, but I strongly believe that ultimately dynamic typing is very much the wrong solution. All I can say is wait a few years. As soon as I make enough money that I can afford to spend significant amounts of coding time on something not likely to make me any money, I plan on designing and implementing a Lispy/Schemey but statically typed language with type inference, variant types, pattern matching, a cool system for accomplishing what OO accomplishes, modules, an interface to C, standard interfaces for sockets & threads & GUIs, etc... and a text editor like Emacs but using this language rather than elisp. Say what? You can't afford to wait around for years in hopes that some random guy from the web creates a great language? Oh...

mike: I plan on designing and implementing a Lispy/Schemey but statically typed language with type inference, variant types, pattern matching, a cool system for accomplishing what OO accomplishes, modules, an interface to C, standard interfaces for sockets & threads & GUIs, etc... and a text editor like Emacs but using this language rather than elisp.

"Lispy/Schemey:" Check. O'Caml, like Lisp, is an impure functional language. Also like Lisp, it has extremely powerful syntactic extension mechanisms.

"Statically typed with type inference:" Check. Like all members of the ML family.

"Variant types:" Check. In two flavors, traditional and polymorphic.

"Pattern matching:" Check. Again, like all members of the ML family.

"A cool system for accomplishing what OO accomplishes:" Check. O'Caml includes an object system with either classes or immediate construction, multiple inheritance, and a clear distinction between inheritance and subtyping. O'Caml remains, as of this writing, the only type-inferred statically-typed object/functional language to have escaped the lab (cf. O'Haskell).

"Modules:" Check. Again, very much like the rest of the ML family. Modules, functors, and recursive modules are all there.

"An interface to C:" Check. Actually, there are several.

"Standard interfaces for sockets & threads & GUIs, etc:" Check, check, check. Sockets and threads are in the standard libraries; Tk support is in the standard libraries and bindings to Gtk+, COM, Win32... are available from third parties.

"A text editor like Emacs but using this language rather than elisp:" Check. That would be Efuns, linked above.

Let me also mention that O'Caml has an interactive toplevel, a bytecode compiler with a time-travel debugger, and a native-code compiler with a profiler. Generally speaking, O'Caml is an excellent performer, i.e. native code comes within epsilon of the equivalent C++ code. As Graydon Hoare says in this slide:

With respect to native integers, while it's true that "int" is a tagged 31-bit value on 32-bit platforms, we should point out that O'Caml also has an "int32" type with its associated functions in the Int32 module. For most purposes, just doing "open Int32" will be sufficient. Also remember that appending "l" to constants will make them int32:

Both of these functions assume an "open Int32" has brought various functions on int32s or returning int32s (of_int, shift_left, logor, etc.) into scope.

Also, O'Caml is explicitly designed to support high-performance scientific computing: records and arrays consisting entirely of floats store their values unboxed, and the Bigarray module supports multidimensional unboxed arrays of ints or floats laid out in either C or FORTRAN style so that they can be passed directly to existing C or FORTRAN code, a fact that the bindings to popular libraries such as BLAS and LAPACK exploit.

The lack of type classes in the ML family is, as I think you alluded to, often dealt with by the use of phantom types. In O'Caml, polymorphic variants are frequently employed as an alternative to phantom types.

I should spend more time translating CTM, but I'm in the middle of section 2.2 of SICP. Section 2.3 has been in the back of my mind, though, and since it has to do with quoting, perhaps I should be an opportunist and take advantage of the current line of thought.

From 2.3.1, SICP gives the following Quotation example:

(define a 1)
(define b 2)
(list a b)    ; => (1 2)
(list 'a 'b)  ; => (a b)

So what's the equivalent in an ML language?

val a = 1;
val b = 2;
[a, b];       (* => [1, 2] *)
???

Thanks.

And I'll try not to ask too many more of these language specific types of questions here, lest Ehud loses patience. :-)

Yeah, sorry not to be clear. "<:expr< [a; b] >>" is the actual quotation. The rest is the result of that quotation, which is dominated by the repetition of the dummy source-code location. If you manage to ignore that, you can see that the AST is basically formed from simple type-constructor/token pairs.

As you might have guessed, .! (pronounced "run") is akin to eval in Scheme or Lisp (only it is far better defined). The run construct also works in compiled code: it essentially does run-time code compilation and linking. There are versions of .! that translate the code into C or Fortran, compile it, and link it in. That C or Fortran code is of course usable on its own (so it can be saved in a file and made part of a library).

The quotation facilities in MetaOCaml or Template Haskell are very primitive. Of course these languages have some advantages over Lisp (or Common Lisp in this case), but I would really not put their metaprogramming systems and CL's defmacro on the same level. Just try implementing something comparable in MetaOCaml or Template Haskell.

The advantage of Lisp is representing data and code the same way *by default*; while you surely can implement something like this in other languages, it would be orders of magnitude less readable.

citylight: You've really sold me on O'Caml there. Are there any catches I should know about?

Sure: if you don't already know any member of the ML family, O'Caml seems really, really weird at first. Some of its syntactic differences from Standard ML are only there for Hysterical Raisins. Function application precedence rules mean that you end up parenthesizing (nested (function applications)) a lot. A lot of O'Caml programmers don't seem to find the standard libraries rich enough. I disagree with them, but it's something you'll hear about if you hang around the community.

citylight: Can you really do quoting and evaluating of syntax like you can in Scheme/LISP? I'm curious as to how that could work without S-expressions... I guess I'll go have a look.

Yes, using the Caml Pre-Processor and Pretty-Printer, known as camlp4, which is included in the O'Caml distribution. There's a very good tutorial on it, too. Basically, it separates the parsing task out from the rest of the compiler and gives you tools for concrete parsing to an AST, and tools for manipulating that AST, including quotation, quasiquotation, and antiquotation, just like Lisp. In addition, unlike Lisp, you can quote any language that you have a printer for. Graydon Hoare wrote a C quoting printer, Cquot, which he used in his One-Day Compilers presentation, in which he implements a very simple DSL by way of camlp4 and Cquot. In other words, his DSL gets compiled as an O'Caml program that prints C source code, the O'Caml program is run, the resulting C source code is compiled with gcc, and the resulting C binary does what the DSL code said to do.

In my experience, it is a pain to set up. I got it sorta working under Windows once, but then when I went to try to use the OpenGL/TK bindings there were bugs that I never got resolved (in particular, it wasn't getting some kind of mouse events, IIRC).

The second time, just recently, was I think on my Debian/Ubuntu GNU/Linux box at home. It is a PPC machine, an old Mac. Those details might explain why O'Caml couldn't even load any of the "standard" libraries.

Maybe blame the installer rather than the language, but the whole gestalt has been very frustrating. And even if I got it to work, what if I wanted to give my code to somebody else to use? With Perl or Java (I shudder to think of using either, really), at least it is a lot more likely they would be able to use what I've written.

Even though everyone loves O'Caml and says it's the answer and what you're looking for, I still sympathize with your desire to make such a Lisp. There's just something innately appealing to me about S-expressions. They feel good.

Thus, if you're going to talk about "s-exprs" or "code-as-data", you had better first specify what exactly you mean by "s-exprs" and where they're used.

Anyways, my point is three-fold:

- As an external representation, I shouldn't care at all how s-exprs are implemented internally. It's just convenient syntax.

- As an internal representation, linked lists do not work. Do I need to explain exactly why?

- Tying in with the previous point, if your computation model is based on munging linked lists, then your language has serious design problems. (Again, for reasons I shouldn't have to spell out, I hope.)

Yes, if you want to be taken seriously. As I explained before, this isn't a forum for "opinions". It is a forum for informed professional discussion. If you don't have any new ideas or results (or links to new research etc.) to add to the discussion, simply mentioning your opinion is a waste of time.

I ask you personally to keep this in mind. The recent threads you participated in resemble trolls, and do not live up to the standard of discussion we like to have around here.

Lack of comprehension on the part of some readers does not make a "troll"; in any case, I realize that the backlash simply comes from the fact that I had the gall to badmouth Lisp. (Making fun of Java never earns someone a "troll" label, for some reason.)

Back to the topic, though: all modern languages use either parse trees or random-access arrays for internal representation. Using linked lists is stupid because they are equivalent to trees and arrays but with demonstrably poorer algorithmic characteristics. (Precisely because they are "linked", i.e. linear.)
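For what it's worth, the complexity difference being claimed here is easy to demonstrate in Scheme itself: indexed access into a list must follow the chain of cdrs, while a vector indexes in one step (this is only a sketch of the access-cost point, not of the larger argument):

```scheme
(define xs (list 'a 'b 'c 'd 'e))
(define v  (vector 'a 'b 'c 'd 'e))

;; list-ref must follow three cdr links before reaching the element: O(n).
(list-ref xs 3)   ; => d

;; vector-ref computes the address directly: O(1).
(vector-ref v 3)  ; => d
```

Of course, lists have their own strengths (O(1) prepend, structural sharing), which is part of why the Lisp tradition keeps both around.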

Using linked lists for a computation model is likewise a poor design choice because you will then need a supporting runtime for memory management that is implemented in some other language. In other words, your language isn't self-supporting.

For example, there are Prolog implementations that do not require a garbage collector. (I cite this not to advocate any particular language, but to point out that proper design decouples the computation model from the supporting runtime.)

In the original message I complained about the fact that Lisp uses the same data structure for parsing and for runtime representation.

Now this is easily fixed in the obvious ways, (just use two different structures) but such a language wouldn't be Lisp anymore. (Because you couldn't use 'cons' and 'cdr' and friends for modifying runtime structures.)

Which is why I complained about the existence of 'cons', 'car' and 'cdr' in the first place.

Thanks for the link, by the way. I haven't had the time to read it through yet, but a much more interesting question to me is whether or not memory management can be done statically at compile time, and if yes -- to which degree and under which conditions.

tkatchev: Thanks for the link, by the way. I haven't had the time to read it through yet, but a much more interesting question to me is whether or not memory management can be done statically at compile time, and if yes -- to which degree and under which conditions.

See the MLKit as a good jumping-off point, then Google "region inference." The executive summary seems to be that region inference can work in all cases in the simply-typed lambda calculus with let-polymorphism, and that the implementation in full Standard ML is a conservative extension. The downside seems to be that, using region inference alone, memory consumption can go to 5x what is actually required by the algorithm(s) in question, so for memory-constrained environments a combination of region inference and real-time GC seems to be called for.

Actually, the real question is how space complexity for static memory allocation compares to a runtime GC.

I hope they are big-O comparable, but a paper linked to on the MLKit site states otherwise:

As mentioned in the preface, the present version of the ML Kit supports reference-tracing garbage collection in combination with region memory management [Hal99]. While most deallocations can be efficiently performed by region deallocation, there are some uses of memory for which it is difficult to predict when memory can be deallocated.

Linear logic has been proposed as one solution to the problem of garbage collection and providing efficient "update-in-place" capabilities within a more functional language. Linear logic conserves accessibility, and hence provides a mechanical metaphor which is more appropriate for a distributed-memory parallel processor in which copying is explicit. However, linear logic's lack of sharing may introduce significant inefficiencies of its own.

We show an efficient implementation of linear logic called Linear Lisp that runs within a constant factor of non-linear logic. This Linear Lisp allows RPLACX operations, and manages storage as safely as a non-linear Lisp, but does not need a garbage collector. Since it offers assignments but no sharing, it occupies a twilight zone between functional languages and imperative languages. Our Linear Lisp Machine offers many of the same capabilities as combinator/graph reduction machines, but without their copying and garbage collection problems.

If your reference to the "computational model" of Lisp (which Lisp, btw?) is meant to mean the lambda calculus, then even numerical datatypes and cons-cells are "opaque objects" in Lisp.

(As far as I know this is even more true for Haskell, where 1 is a function that yields a value representing 1...)

If your criticism is targeted at s-exprs, then #(1 2 3) should be a sufficient answer ;)

PS: Regarding the lambda calculus: when was the last time you saw anybody programming a Turing machine, which is the computational model of most imperative languages? (This is just to show that this comparison is not buying you anything ;) )

Thus, you're using closures as cons-cells. Fascinating idea, but I'm still not sure whether this ultimately affects anything. (Could you build more complicated data structures on top of closures? I'm guessing that yes, you could.)
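For the record, the classic construction (it appears in SICP) builds pairs, and hence any cons-based structure, out of closures alone; a sketch, with the names kons/kar/kdr chosen to avoid shadowing the built-ins:

```scheme
;; A pair represented purely as a closure (the SICP construction).
;; kons/kar/kdr are hypothetical names so the built-in cons/car/cdr
;; are not shadowed.
(define (kons x y)
  (lambda (m) (m x y)))          ; the pair is a procedure awaiting a selector

(define (kar p)
  (p (lambda (x y) x)))          ; select the first component

(define (kdr p)
  (p (lambda (x y) y)))          ; select the second component

;; Nesting these pairs gives lists, trees, etc., so yes: more
;; complicated data structures can be built on top of closures.
(define p (kons 1 (kons 2 3)))
(display (kar p)) (newline)        ; 1
(display (kar (kdr p))) (newline)  ; 2
```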

a Turing machine, which is the computational model of most imperative languages

Last time I checked, a TM computes a function from (finite) bitstrings to bitstrings. What is so imperative in that? There are some attempts to stretch the definition to cover functions from (infinite) bitstreams to bitstreams, but I still fail to see how that is imperative.

Not true. Most imperative languages model some sort of register machine, and register machines have (conceivably) different complexity characteristics from Turing machines. Moreover, even different flavors of register machines have different complexity characteristics, so this is a difficult question that you cannot just dismiss so bluntly.

Back to Lisp, though: I've a hard time buying the notion that Lisp models the lambda calculus; at the very least, I don't see how cons-cells and lambda calculus relate to each other.

In short, my criticism of Lisp boils down to the opinion that any computational model based on cons-cells is broken in many ways. Get rid of cons-cells and I'd take my words back; but this brings up the question of how you can have any sort of Lisp without cons-cells.

O(1) is indeed a common shortcut for saying "in constant time". However, the issue tkatchev seems to want to discuss is not access to cons cells but rather access to lists. And it is true that for a list of length n, accessing an arbitrary element is *not* O(1).

BUT: As was clearly explained here time and again, Scheme has vectors, and Lisp has its own constant-time vectors, so the fact that lists behave differently is beside the point.
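A quick sketch of the distinction in portable Scheme: list-ref must walk the spine of the list, while vector-ref indexes directly.

```scheme
;; Lists: reaching the nth element walks n cdrs, so access is O(n).
(define xs (list 10 20 30 40))
(display (list-ref xs 2)) (newline)   ; 30, found by walking two cdrs

;; Vectors: the Scheme analogue of the #(1 2 3) literal; indexing is O(1).
(define v (vector 10 20 30 40))
(display (vector-ref v 2)) (newline)  ; 30, found by direct indexing
```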

Your post amounts to the following: Lisp and Scheme are front-ends to C with annoying syntax.

Fine, if you find that sort of thing exciting, I can understand that. Me, personally, I don't want nor need the existence of a front-end to C with s-expr syntax, I find coding straight C comfortable enough as it is.

The crux: whether Lisp is simply syntactic sugar for list manipulation for those who love parentheses, or whether there is really something fundamentally useful in Lisp after all.

I really tried to keep up with the rules for LtU, but if even Ehud can't stop feeding a troll, neither can I.

"Lisp and Scheme are front-ends to C with annoying syntax."

And you seem to be a front-end to a boring, tireless Turing-machine with really annoying syntax and semantics.

Your whole posts against Lisp come down to this: you don't like the syntax. That's it. And here we are wasting our time debating why you should like Lisp's s-exprs...

"I find coding straight C comfortable enough as it is."

So, there you are. You dislike the syntax so much you feel it's fine to give up all of Lisp's power, flexibility and high-level programming in favor of a small, ancient, severely limited imperative language, just in order to have O(1) access with a simple syntax like a[number].

yep, that sounds reasonable and very edifying...

"whether there is really something fundamentally useful in Lisp after all."

I take it for granted that its lifespan and influence over the industry alone are proof of that. Of course, the same can be said of C. Still, as Paul Graham noticed, languages have progressively adopted the Lisp model. And as Guy Steele noticed, Java is halfway there from C++...

rant off. I will ignore your posts about your dislike for Lisp's syntax from now on. Have a nice day.

Your post amounts to the following: Lisp and Scheme are front-ends to C with annoying syntax.
Fine, if you find that sort of thing exciting, I can understand that. Me, personally, I don't want nor need the existence of a front-end to C with s-expr syntax, I find coding straight C comfortable enough as it is.

Note that what he actually said is quite different from what you CLAIM he said.

Unfortunately when people cannot follow WHO says WHAT, then misunderstanding is bound to follow.

Points 3 and 4, but with the following qualifications: a) you don't need lambda calculus for expressing the Lisp computational model and b) you can't talk about the complexity of car/cdr/cons, since complexity is a property of an algorithm being executed on a computational model, and not of the computational model per se.

As for answering the question of "which Lisp" -- to do that, you'd need a more strict definition of what the Lisp computational model actually implies. This is a fault of the "Lisp culture"; Prolog or C programmers have no problem in that regard.

If you are familiar with Java and you like Scheme I think you might like Kawa very much: http://www.gnu.org/software/kawa/

I have used it in a few commercial projects with excellent results.
By the way, although you can use it in an interactive REPL (as with any other Scheme implementation), Kawa is actually a compiler which generates JVM byte codes. It even allows you to optionally declare the types of variables, generating code which is practically identical to what you would get by writing in Java and using javac.

"I don't want to load a SRFI number, I want the Scheme equivalent of saying import java.util.foo with a name that maps to functionality."

Um...I'm not sure what you mean by this. Java's import command is just syntactic sugar to keep you from having to fully qualify all identifiers. It doesn't really do anything except tell the compiler where to look for any unqualified identifiers.

I fail to see any significant difference between Java's import & loading an SRFI.
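The mechanics vary by implementation, but the shape is much the same; a sketch assuming a PLT-style require of SRFI 1, the list library (other Schemes spell it differently, e.g. Guile's (use-modules (srfi srfi-1))):

```scheme
;; Load SRFI 1 (PLT-era spelling; the exact form is
;; implementation-specific, which is part of the complaint).
(require (lib "1.ss" "srfi"))

;; After loading, SRFI 1's names are simply in scope, much as
;; java.util classes are after `import java.util.*;`.
(display (iota 5))   ; (0 1 2 3 4)
(newline)
```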

"I don't want to import an object system. I want it to be there and feel like it's built-in."

Then I wouldn't think you'd want Scheme. There are plenty of languages that force a specific object model on you.

Scheme's basic philosophy seems to be to provide little more than the primitives needed to add whatever you want to the language. I can't tell you how much time I wasted trying to shoehorn an alternate object model onto C++ or Java for specific purposes. Scheme makes it much easier.

(I actually like C++'s object model a lot, but I don't really consider it general purpose. There are some things I'd choose it for, but many things I wouldn't.)

"I understand the appeal of Scheme's minimal definition, but when I'm using it for practical purposes, I don't want the API to feel tacked on. [...] I guess what I want is a Scheme that reminds me of Common Lisp while not being as ugly as Common Lisp is. I need a full-fledged toolbox, and I need it to feel seamlessly integrated."

Well, although CL may come with a huge standard library out-of-the-box, that ugliness makes it feel tacked on to me. Many of the non-standard libraries I've used with Scheme feel much less tacked on.

Having tried both, for the moment, I'm preferring the con of not-a-full-standard-library to the con of a-really-ugly-standard-library.

So far, for the things I've been doing, putting together an implementation & specific libraries to get practical work done has been relatively painless.

"Lastly, what do people think about implementations? I keep shopping around for one that I like, and I keep coming up empty."

Good question.

When I first tried Scheme, I used guile, since it was already installed on my workstation. This time I chose scsh because I'm most interested in finding a replacement for perl & ruby.

And, I tried really hard to like Emacs but failed.

I do wonder about the scalability of CL or Scheme along the number-of-coders axis. When you have 20+ programmers working on code that was originally written by a separate group of 20+ programmers... The limitations of a language like Java might actually be a benefit in these environments. On the other hand, if CL or Scheme really makes programmers more productive...

Yes... I suppose it was a small nitpick but I agree. It's a small issue, but usability seems often to be in the details. I just wish that dealing with the outside universe in Scheme could feel as natural to me as the rest of the language, which I love.

When I first tried Scheme, I used guile, since it was already installed on my workstation. This time I chose scsh because I'm most interested in finding a replacement for perl & ruby.

I've also been searching for a really good, really fast implementation for OS X/darwin PPC. On my x86 machines I just use MIT Scheme because I never do much non-trivial development on them, but I'm currently in the market for a sexy and fast, R5RS-compliant Scheme that'll compile on OS X.

And, I tried really hard to like Emacs but failed.

Emacs is strange. I've been trying to learn how to use it for a few years now--I even used the much-vaunted SLIME mode when I was teaching myself Common Lisp--and I always find myself going back to TextWrangler or BBEdit or metapad (whatever's installed on the system I'm using)... it seems that once I have a mastery of it, it'll be the best editor on Earth, but so far it has thwarted my attempts at every turn. What is the best way to learn Emacs? Is there some trick I'm missing?

(Not to get off topic too much, but) Emacs comes with a built-in tutorial. I used it when I first started learning Emacs and I thought it helped. It is described in many web pages, e.g.: this one care of somebody at U Chicago. Basically, you press the Esc key, then x, then type in "help-with-tutorial" and the Enter key (in Emacs speak, that is "M-x help-with-tutorial"), which will launch the tutorial for you.

I've found the key to Emacs fluency (and Lisp fluency via Emacs) is relentless customization and automation. Emacs is designed to be extended and made more friendly in baby steps far more than any other editor I've used. Along these lines, you should spend quality time with:

"C-h v" (describe-variable) and "C-h a" (apropos) to look for features you might not know about;

"C-h k" (describe-key) to find the function names for interactive commands, so you can combine frequent sequences into your own functions;

"M-x find-function" to see how editing commands are implemented, a wonderful repository of Elisp (and C) best practices.
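As a tiny example of combining a frequent sequence into your own function, here's an Elisp sketch; the function name and keybinding are purely hypothetical, pick your own:

```elisp
;; A sketch of the "combine frequent sequences" idea: one command
;; that duplicates the current line. Name and binding are
;; hypothetical illustrations, not Emacs built-ins.
(defun my-duplicate-line ()
  "Duplicate the current line below point."
  (interactive)
  (let ((line (buffer-substring (line-beginning-position)
                                (line-end-position))))
    (end-of-line)
    (newline)
    (insert line)))

(global-set-key (kbd "C-c d") 'my-duplicate-line)
```

Dropping a few of these into your .emacs over time is exactly the kind of baby-step extension being described.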

We were discussing this around the office today and I think the most important thing is vocabulary. If at any given time there are a couple of new Emacs commands that you're forcing yourself to use then over time you will become fluent. The best way to discover good commands to learn is by watching over people's shoulders and the second best is probably using keywiz.el.

Here are some life-improving commands: beginning-of-line, exchange-point-and-mark, forward-sexp, transpose-chars, ediff-revision. It's also worth learning these alternatives to harder-to-reach keys: tab = C-i, return = C-m, forward = C-f, backward = C-b, down = C-n, up = C-p, backspace = C-h (requires config). This seems really sick to most people but the ones who practice it love it -- and that's the best kind of thing to learn!

Bigloo looks really cool. I noticed that the current version hasn't been updated in about a year. Is that because it is really stable? Are you (rkb) an experienced Bigloo user, and, if so, can you summarize your opinion?

Then, I installed bigloo with apt-get install bigloo, and compiled my program with bigloo hello-world.scm, which generated a 7416-byte binary (that's on a 64-bit OS; the 32-bit version might be smaller). I ran it, and it printed the following message:

Hello, World! Apparently, Luke is unlucky! ;oP

There you have it - computers don't lie!
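For reference, the source being compiled here need be nothing more than plain R5RS; a minimal sketch of such a hello-world.scm (Bigloo also accepts an optional Bigloo-specific (module ...) header, omitted here):

```scheme
;; hello-world.scm -- a sketch of the kind of file compiled with
;; `bigloo hello-world.scm`. Bigloo translates it to C and then
;; to a native binary.
(define msg "Hello, World!")
(display msg)
(newline)
```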

Keep in mind, though, that Bigloo compiles through C, so one way problems could occur is if your C environment isn't set up correctly, or isn't compatible with Bigloo for some reason. I haven't encountered that myself, I'm just speculating as to the reason for your luck.

PLT falls short because it depends too much on DrScheme, and I want to use Emacs.

I use emacs with DrScheme all the time. I typically have a top-level file, say top.scm, which I open in DrScheme. This file requires the other modules as needed. When I need to edit one of the other files, I do it in emacs, save the file, and run top.scm again. Voila!

Once in a while I'll need to look at the cross-referencing, etc, from check-syntax, in which case I'll open the needed file in a new DrScheme tab. But I always edit the file in emacs.
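A sketch of that arrangement, with hypothetical file and procedure names, using the PLT module form of the era (each module lives in its own file, as the comments indicate):

```scheme
;; top.scm -- the file kept open in DrScheme; re-run after edits.
(module top mzscheme
  (require "helper.scm")          ; a hypothetical sibling module
  (display (greet "world"))
  (newline))

;; helper.scm -- edited in emacs; re-required each time top.scm runs.
(module helper mzscheme
  (provide greet)
  (define (greet who)
    (string-append "hello, " who)))
```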

But also stick with Haskell. Start reading papers with results in Haskell. Figure out the typeclass system. Figure out functional dependencies. Figure out monads.

If you are finding Haskell "hard" then that should tell you that you are learning a lot when you are learning Haskell. That means you should do more of it. Check out Qi along with it. I personally find the dual language approach best.