secondly, give me an example of "a new source of abstractions." Again, keep it simple. I'm a below average engineer, not a theorist, remember.

Having moved to CL from ML, one of the things I find myself wanting every once in a while is to use types as a method of abstraction, in particular what ML terms algebraic data types (or at least that's how I was taught it).

A good example of this functionality already in Lisp is the list, which is either a cons cell or nil. That is largely a conceptual abstraction that makes lists easier to reason about. In ML you can declare new constructors to define, say, a tree in the same way: it is either a Node with three fields (a left branch, a value, and a right branch) or a Leaf with just a value. There are many other ways to represent this (a leaf could be a node whose branches are both nil, a la the common C++ definition, or you could use nested lists, as I assume is common in Lisp systems), but once I saw this method I always found it more intuitive.

Now that being said, it is possible to some extent to mimic the dynamic behavior of such constructs by writing the predicates by hand and/or using the DEFTYPE form (I believe there are asdf-installable packages that bundle the work into a couple of macros). However, there are a few reasons I always feel put off by this approach. The main one is that I really don't understand DEFTYPE: I keep reading the HyperSpec definitions and coming away still not knowing how to use the form correctly. The other reason is that type declarations, which can be annoying to include in the first place, are also allowed to be ignored by the compiler in implementation-dependent ways, which makes the whole behavior questionable when migrating between implementations.
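To make the tree example concrete, here is one way it might be mimicked in Common Lisp with DEFSTRUCT and DEFTYPE. All the names here (LEAF, NODE, TREE-SUM) are made up for illustration; this is a sketch, not the only way to do it:

```lisp
;; An ML-style tree with two constructors, sketched with DEFSTRUCT.
(defstruct (leaf (:constructor make-leaf (value)))
  value)

(defstruct (node (:constructor make-node (left value right)))
  left value right)

;; DEFTYPE names the union of the two structure types, roughly like
;; ML's `datatype tree = Leaf of int | Node of tree * int * tree`.
(deftype tree () '(or leaf node))

;; ETYPECASE dispatches on the constructor, a poor man's pattern
;; match, except nothing checks at compile time that the cases are
;; exhaustive.
(defun tree-sum (tree)
  (etypecase tree
    (leaf (leaf-value tree))
    (node (+ (tree-sum (node-left tree))
             (node-value tree)
             (tree-sum (node-right tree))))))

;; (tree-sum (make-node (make-leaf 1) 2 (make-leaf 3))) => 6
```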

That being said, there are arguments both for and against static type systems, a la ML / Haskell, versus a dynamic type system like Lisp. The particular win I see for Lisp is exploratory programming, which is almost prohibitive in a static type system for any problem that incorporates meaningful type abstractions. However, when not doing exploratory programming (which, admittedly, is my preference), compile-time type inference provides a lot of useful information for debugging, without the tedium of writing type declarations everywhere, declarations that I often feel I don't understand anyway.
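For what it's worth, a little of that compile-time feedback can be recovered in CL with declarations, though how much checking you actually get is implementation-dependent (SBCL is notably aggressive about it). The function name here is invented for the example:

```lisp
;; Declare the signature once at top level...
(declaim (ftype (function (fixnum fixnum) fixnum) add-counts))

(defun add-counts (a b)
  (+ a b))

;; Under SBCL, compiling a call such as (add-counts "one" 2) provokes
;; a compile-time warning, because the argument conflicts with the
;; declared FIXNUM type. The standard itself only says that violating
;; a declaration has undefined consequences.
```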

So does lisp need a better type system? In my opinion, there are type constructs that can be useful for thinking about problems, and type inference can help in debugging. But much of that is a personal choice, some people view one or both of those as a hurdle to effective software development. A different type system is one option, though in many cases I'd settle for better documentation of the one already existing. I just never did find it clear.

findinglisp wrote:Whenever I read anything about type systems on Lambda the Ultimate or other more theoretically-oriented sites, my eyes just glaze over and I start drooling.

Well, yeah. That's because LtU people are off in the future, working on type systems that have powerful capabilities but haven't yet been reduced to a form one can use and understand without learning category theory first. If you want a feel for what a more advanced type system can do for you, go play with some form of ML. It's not as far along as dependent types and the other drooligens you see on LtU, but it's much nicer than the sorts of things you see in more mainstream languages.

Good suggestion. Thanks, I'll check out ML. That's one language I have never looked at in the slightest.

DEFTYPE cannot specify a recursive type, so there is no way to define, say, a SIMPLE-LIST type (one whose CAR is always of a specific type).
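For instance, the obvious recursive definition is not legal; a sketch of what one might want to write (the names LIST-OF and FIXNUM-LIST-P are hypothetical):

```lisp
;; NOT portable: a DEFTYPE body may not expand in terms of itself,
;; so this typed-list definition can recurse forever at expansion time.
(deftype list-of (element-type)
  `(or null (cons ,element-type (list-of ,element-type))))

;; The usual dodge is a SATISFIES predicate, which walks the whole
;; list at check time and gives the compiler nothing to infer from.
(defun fixnum-list-p (x)
  (and (listp x)
       (every (lambda (e) (typep e 'fixnum)) x)))

(deftype fixnum-list () '(satisfies fixnum-list-p))
```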

Arrays of element type NIL are strings (NIL is a subtype of CHARACTER, so a vector of element type NIL satisfies STRINGP).

Incompatible redeclaration of a global TYPE or FTYPE is business as usual. No need to forbid it, but requiring consistency *across* different compilation units and a full warning on redefinition would be good.
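A sketch of the kind of silent inconsistency meant here (FROB is a made-up name):

```lisp
;; In file a.lisp:
(declaim (ftype (function (fixnum) fixnum) frob))

;; In file b.lisp, compiled separately:
(declaim (ftype (function (string) string) frob))
;; No conforming implementation is required to even warn that the
;; two declamations contradict each other.
```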

No standard way to extend proper compilation units across files. (WITH-COMPILATION-UNIT deals with warnings, and implementations are not allowed to make it imply other things for multi-file scope without using a non-standard keyword argument.)
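For reference, the standard form only groups diagnostic output:

```lisp
;; WITH-COMPILATION-UNIT defers "deferred warnings" (e.g. about
;; functions referenced before definition) until the whole body
;; finishes, but portably it does not extend type, inline, or other
;; compilation information across the files.
(with-compilation-unit ()
  (compile-file "a.lisp")
  (compile-file "b.lisp"))
```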

Subclasses are allowed to restrict their slot types.
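Which breaks substitutability; a minimal sketch, with invented class names:

```lisp
(defclass box ()
  ((item :accessor item :type t)))

;; Legal per the standard: the subclass narrows the inherited slot.
(defclass int-box (box)
  ((item :type integer)))

;; Code written against BOX can now do
;;   (setf (item some-box) "a string")
;; which is fine for a BOX but undefined behavior for an INT-BOX.
```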

Non-monotonic class linearization.

No sealing.

No way to define funcallable structures (or structures with other metaclasses).

No standard environment access to declared types.

No CONSTANT-VALUE. (Extract the value of form for which CONSTANTP is true.)
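That is, something like the following, which today every macro writer reinvents with EVAL (the function name CONSTANT-VALUE here is the hypothetical operator being asked for):

```lisp
;; CONSTANTP says "this form is constant", but the standard offers no
;; accessor for the value, so the common workaround is EVAL.
(defun constant-value (form &optional env)
  (assert (constantp form env))
  ;; Relies on constant forms being side-effect free; a standardized
  ;; CONSTANT-VALUE could use the compiler's own knowledge instead.
  (eval form))
```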

No standard way to write optimizers for post-source stages of compilation, where type-inference information is available. (Moral equivalents of SBCL's DEFTRANSFORM and DEFOPTIMIZER with DERIVE-TYPE.)

Some of these can be dealt with in user space, some with implementation-specific extensions -- but some things, like subclasses restricting their slot types and non-monotonic linearization, cannot (well, they can if you reimplement chunks of CLOS, but that's like saying you can do closures in C). How serious such omissions are is another question entirely.