I started a big project at work using Common Lisp in 2017 and could not be happier. Sure, most nice features have trickled down to other languages, but they are rarely as nicely integrated. And Lisp still has many advantages that are not found elsewhere: unmatched stability, on-demand performance, a tunable compiler, CLOS, the condition system, and macros, to name a few. It has its warts too, but which language does not?

I found the lack of high-quality library documentation a bit annoying, but ultimately a non-issue: practically all of the libraries I have used so far included tests and/or examples.

Lastly, this rarely gets brought up, but I think Common Lisp has some of the best books available out of any programming language. The fact that it is so stable means that most of the material, and code, from the end of the '80s and the '90s is quite relevant today, and new material is still being written.

The biggest downside is that it makes JavaScript and Python revolting to work with. But I can still enjoy SML for example.

I have used both in a production setting and can say that the tooling for most CL implementations is just plain light-years ahead of Clojure's, and there is no sign of that really improving.

For Clojure the interactive debugging experience is just plain dreadful, and for a dynamic language this is pants-on-head crazy, imo. For me, a dynamic language has to have a good interactive debugging experience, because you have forgone the support of static compilation and instead wish to reason about your program at runtime. But with the JVM stack traces and the lack of an interactive debugger, Clojure just does not support you in this regard. And as a side note, the lack of an identity print is annoying as well: in CL, (+ (print 4) (print 4)) prints 4 twice and returns 8 (yes, it's easy to add...)

Clojure's decision to use persistent data structures is good, but it's really the only standout thing for me, other than some syntactic sugar like hash-map literals.

The JVM indeed quickly got me disinterested in Clojure. However, I have similar issues with other (free) Lisps, with probably only Emacs Lisp being the exception.

As a Smalltalker ("the other heroin of the programming world") I'm used to a highly integrated and responsive development environment. From what I have heard from other Lispers, the quick feedback and gradual building-up of your program is a shared aspect. But the thing that Smalltalkers always bring up, and Lispers less so, is that your program and your IDE are basically indistinguishable, which is quite powerful because it makes it trivial to adapt your IDE to your project as you go. In the Lisp environments I've tried - the latest being Emacs+SBCL - the split between editor and REPL seems unnatural to me, more so because in the standard case you work with two different dialects of the language. It seems that only ACL and LW have more unification.

Am I wrong and are the $$$$ versions not better in that respect and should I just drop my Smalltalk habits/arguments when learning CL?

Lisp provides that, too. But you can also deliver programs with the IDE and much of the development tooling removed.

Examples for integrated IDEs:

Allegro CL on Windows/Unix+Gtk, Clozure CL (free) on Macs, LispWorks on Windows/Macs/Unix+GTK/Unix+Motif. There the IDE and the user code runs in one Lisp. Other examples: Lisp Machines, CMUCL on X11, ... there is also the McCLIM project where some ideas from the Symbolics GUI are used.

There are also countless other implementations from the past, which are now mostly forgotten, which had an integrated IDE (Golden Common Lisp for Windows, Corman Lisp for Windows, Macintosh Common Lisp, Medley, Open Genera for X11, ...).

What is the difference between introspective and reflective? I've used introspection in Python and know about and used (a bit) reflection in Java earlier, but thought they were roughly the same sort of thing. Please explain the difference if possible.

Introspection means that the program's structures and procedures are discoverable and we can query them. Typical questions we might want answered:

* find me the class, typically by name or by some relationship

* what are the fields of that class?

* find me a function

* what are the arguments of the function? does it have documentation? Who wrote it? Where is its source? What values does it return?

* where is the function used? what functions does it call?

* what are the values of a field of some object? What class does it have?

This and more for example enables you to inspect a program at runtime and find out what it does, how it does it and what its state currently is.
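A few of those questions, sketched as standard Common Lisp introspection calls at a REPL (the exact output is implementation-dependent):

```lisp
(find-class 'string)               ; find a class by name
(documentation 'mapcar 'function)  ; does the function have documentation?
(describe #'mapcar)                ; arguments, source, etc. (output varies)
(class-of "hello")                 ; what class does this object have?
```

All four operators are part of the ANSI standard, so these queries work on any conforming implementation.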

Reflection means that elements of the programming language are themselves exposed and we can change/extend them.

Two examples from the Lisp world. Very unusual is the reflective tower, where a Lisp program is run by some kind of machine, which for example is a Lisp interpreter. This Lisp interpreter is itself a Lisp program. Thus you can not only write the program, but also change/extend the interpreter running the program. A certain Lisp dialect also allowed looking at the interpreter running the interpreter running the interpreter running the interpreter ...

In Common Lisp a typical form of reflection is the Meta-Object Protocol of CLOS. It allows you to program the object system to implement new variants: persistent objects, transactions over objects, different inheritance mechanisms, classes which record their instances, ... Thus the MOP exposes classes, methods, generic functions, slot descriptors, ... as CLOS classes and methods - and protocols about them. Thus CLOS can be programmed in itself. Even at runtime.

More primitive reflection would allow you to create/change/remove things like user classes and user methods. Task: create a new subclass of an existing class at runtime.
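A rough sketch of that task, using the MOP's ENSURE-CLASS (here via SBCL's SB-MOP package; class and slot names are invented for illustration):

```lisp
;; Define a base class the usual way.
(defclass point ()
  ((x :initarg :x) (y :initarg :y)))

;; Create a subclass at runtime through the MOP, without DEFCLASS.
;; ENSURE-CLASS takes canonicalized slot specifications.
(sb-mop:ensure-class 'point-3d
                     :direct-superclasses '(point)
                     :direct-slots '((:name z :initargs (:z))))

;; The freshly created class is immediately usable.
(make-instance 'point-3d :x 1 :y 2 :z 3)
```

On other implementations the same function is exported from their MOP package (or portably via the closer-mop library).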

Interesting, and thanks for the answer. It's a while ago, but when I used reflection in Java (in v1.4, IIRC), what it did seemed like some of the capabilities you describe under introspection above. Maybe they just used a different term for it than you do. Example: finding the methods of a class and calling them dynamically at runtime, finding the number and types of the arguments of a method, etc. These capabilities were part of the java.lang.reflect package, IIRC.

> Introspection should not be confused with reflection, which goes a step further and is the ability for a program to manipulate the values, meta-data, properties and/or functions of an object at runtime.

You might find MIT Scheme interesting, because it has a built-in Emacs clone. Just call (edit) and you're in.

I think what it really comes down to is that writing and maintaining an editor (especially a good one) is a very large task. A language that lets you use an editor you already like thus has a double advantage, with the possible downside that the integration won't be as good as possible. SLIME is pretty nice though.

To debug a function, yes, you need to eval the form using instrumentation. Then the debugger starts when this form is evaluated. This is the only major difference from what SLIME provides. Adding breakpoints is somewhat easier with CIDER: you don't need to call break, you can just add a breakpoint to any sexp by pressing b.

I'm not OP, but there are a few I could name; some were mentioned in the OP's post in passing.

* The Common Lisp condition system [3, 0] is a far better way to handle errors.

* Tunable compilers [1] allow you to make better trade-offs than even GCC allows for C, let alone Clojure's very opaque compiler.

* CLOS [2] is a very powerful object system which Clojure does not replicate (it has multimethods and Java classes, but not a metaclass system; think Python's metaclass features, but better (because of macros and compiler access) and with multimethods designed in).

* Nice native interoperability (Clojure generally being on the JVM, while Common Lisp is capable of direct calls and memory manipulation).
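As a sketch of what "tunable" means in practice (declarations are advisory per the standard, but compilers like SBCL take them seriously; the function is made up):

```lisp
;; Per-function OPTIMIZE settings plus type declarations let the compiler
;; generate specialized machine code for this one hot function, while the
;; rest of the program keeps full safety checks.
(defun dot (a b)
  (declare (optimize (speed 3) (safety 0) (debug 0))
           (type (simple-array double-float (*)) a b))
  (loop for x of-type double-float across a
        for y of-type double-float across b
        sum (* x y) of-type double-float))
```

With (speed 3) SBCL will also print efficiency notes wherever it could not open-code an operation, which is how you iterate toward fast code.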

That being said, Clojure's unparalleled concurrency features are amazing. I would put it this way: if Clojure is the Java of Lisps, then Common Lisp is the C++ of Lisps.

Clojure variants are incompatible in various basic ways, since they use the host language for various things. The underlying platform also restricts what the implementation can provide. No TCO on the JVM -> no TCO in Clojure. Numbers are internally floats in JavaScript -> numbers are internally floats in ClojureScript and use JavaScript semantics, not Clojure semantics.

Clojure:

Clojure 1.8.0
user=> (/ 3 4)
3/4

ClojureScript

cljs.user=> (/ 3 4)
0.75

Looks like these are different languages...

Common Lisp implementations OTOH implement most of the standard. There is also more choice in implementations:

Obviously highly opinionated and personal statement coming, but Common Lisp is to Clojure what Meccano is to LEGO, and I have never been a LEGO person.

If I work on a project where functional programming with an emphasis on purity is required, I will much rather go to the ML family, which I think offers a more natural way to express ideas in the functional paradigm, with a beauty and integrity not even Lisp can match.

I write portable code and use libraries which abstract away implementation specific stuff so it will be easy to run on any other conforming implementation should the need arise.

The community is quite responsible when it comes to portability and the utility libraries which help with it; on top of that, the Quicklisp maintainer tests library compilation on most major implementations and reports bugs.

It baffles me that this notion of Lisp as an interpreted language still persists. Common Lisp is a compiled language in all non-toy implementations, and in general Lisps have had compilers for decades.

> You don't need an interpreter for that. SBCL (as do other compilers) always compiles an expression prior to executing it, even when it's code as data passed to EVAL.

True, but that's not what a Lisp interpreter does and provides.

I was talking about the difference between an interpreter and a (possibly interactive) compiler. The interpreter works over source code as Lisp data. The compiler does not - it does not matter that the compiler compiles individual forms. At runtime the code is machine code. With an interpreter the code is Lisp data.

Think about it: What difference could that make, if the code gets actually executed by a real Lisp interpreter?

ECL is similar, but designed to be easily embeddable in C applications.

Clasp is a newer one that's built on top of LLVM (still unstable, as far as I can tell, but actively developed).

ABCL targets the JVM platform and, consequently, gives you access to all the libraries in the Java ecosystem.

And then there are commercial implementations like LispWorks and Allegro that have their own benefits: LispWorks has, I hear, a very nice cross-platform GUI library, and, although I don't know much about it, I suspect Allegro has its own perks.

And, because of the community's emphasis on avoiding implementation-specific behaviors, which implementation you choose isn't that big a deal: I regularly develop a project under multiple implementations, and seldom have major issues doing so.

You build trust with stakeholders, usually by solving important problems until you reach a point at which having you solve a new problem is more important to them than what you solve it with. It helps if whatever tech you are introducing actually helps you in the task of solving more problems, increasing quality of solutions or reducing cost, in that particular order.

I don't like Clojure (any more), but one nice thing about it is that it includes the Clojure implementation itself as a regular versioned dependency on a project-by-project basis (thanks to the transparent use of Maven by the Leiningen build tool). Once you've tried this, everything else feels like a kludge; there's simply no reason the language version should be managed by some obscure mechanism instead of being a regular dependency!

I'm used to languages like Python, that have a number of files that are modules, and to start a program you run one of them as an entry point.

C programs consist of a lot of files that are compiled and linked into a binary executable.

Whenever I've tried to learn CL, I couldn't really wrap my head around what the eventual program would be. You build an in-memory state by adding things to it, later dump it to a binary. How do you get an overview of what there is?

I mentioned elsewhere I'm learning Common Lisp. I'm also learning Python by translating some Common Lisp code into Python and part of that is to make sure I understand what the Common Lisp is doing. As an understatement, that's meant building some very unPythonic abstractions (yay, me).

Anyway, I think there is a fundamental design difference between Common Lisp and other 'first class' programming languages: Common Lisp was designed as a way to use computers. A computer user would sit down at a Lisp Machine (which was the future when Common Lisp was designed) and use Common Lisp to do ordinary computer stuff like store their recipes, manage appointments, and copy files between directories. It's reflected in the :cl-user package not being called :cl-programmer, and in the logic of typing (in-package :tps-report) on Saturday at the usual time.

Most other languages don't have this idea... In Unix, users don't fool around with stdio.h. In Python, there's no clean way of switching between applications at the REPL - or rather, interpreter - because of how Python handles the hard job of naming things (like most languages, there's a bit of punting on third down with the catch-all epithet 'unpythonic').

This makes it hard to get one's head around Common Lisp, but it explains why a person might leave a REPL running for a week and make better progress because of it.

Common Lisp was actually designed so that you don't need a Lisp Machine. The people working on it were from CMU (Unix workstations), Lucid (Unix workstations), Franz (Unix/Windows), Apple (Mac), Symbolics (Lispm, PC), Xerox (Lispm, Unix), and many others.

Users used an editor-based IDE (like Franz ELI with GNU Emacs, ILISP with Gnu Emacs), a special IDE usually written in Lisp (Franz, LispWorks, Macintosh Common Lisp, Golden CL on Windows, ...) or even a Lisp Machine which combines the IDE with the operating system.

> copy files between directories

The Symbolics Lisp Listener (REPL + commands) has a command language/interface with a lot of comfort - but with its own usability problems.

You would often type

Copy File *.lisp.newest >subdir>

instead of using the Lisp function COPY-FILE.

Still today I tend to keep the Lisp IDE running for days/weeks/months... as long as possible. Sometimes Linux tells me I have to restart the machine after some software update...

Sorry for not being clear. I was not stating that Lisp Machines were an intended requirement, my intent was to point out that Lisp machines were the zeitgeist during the period when Common Lisp was incubated and developed and hypothesize that this is reflected in the design of the language.

As a point of contrast, Smalltalk was developed in part with the idea of Dynabooks in the hands of children. Hence its stereotypical use cases presume a substantial difference between the cognitive capabilities of the system's users and the cognitive capabilities of system's programmers.

Common Lisp was originally mostly a simplified and modernised version of Lisp Machine Lisp. It was supposed to be 'cheaper' for users (industry, military, ...) when delivering software - without the 'hardware dongle' of a Lisp Machine, which cost a lot, in both hardware and software.

One of the main purposes of its existence was to be a standard Lisp able to work on a wide variety of hardware - hardware which was less powerful. Thus the design in the early days was also driven by taking features away from what a Lisp Machine would provide. Fewer features, easier to implement, able to integrate into different environments.

For example, Common Lisp provides type declarations. These were added for non-Lisp-Machines: the Lisp Machine compiler ignored type declarations and did not use them, because the CPU does runtime type checking/dispatching on a Lisp Machine. Always.

Everything fancy, which was difficult to implement on small machines, was removed. The result was CLtL1.

The assumption was that you could develop a Lisp based application (say, an expert system helping with jet turbine maintenance) on some platform of choice and then deliver it to the Air Force on some rugged PC, where it would be used on some airbase. People then thought, why not develop on the PC? Thus various implementations for PCs came up.

NASA was putting it on an embedded computer running millions of miles away from Earth, controlling a spacecraft. There was really no limit to where one would have wanted it deployed... thus it ended up controlling your cleaning robot (Roomba)...

I have a couple REPLs for specific projects that I routinely keep running for months at a time.

The idea that user = programmer was part of the MIT AI Lab culture before there were Lisp Machines. For example, the top level of ITS, the PDP-10 OS they used, was the debugger. Imagine if the default Linux shell was GDB!

That's kind of how my old MSX was: essentially an MS BASIC shell that you could program in. If you wanted to run a compiled program (a game, who are we kidding?), you used BASIC to bootstrap the load and replace itself with whatever the tape had. Good times.

Some vestiges of that persisted for much longer. For example, QBasic, and even VB for DOS, still had the FILES command, which, as you'd expect, produced the list of files in the current directory - except it didn't return it, it simply printed it to the console. So it was rather useless in an application, but handy in a shell.

Less obvious things were having a variety of filesystem-related functionality as built-in statements with special syntax, rather than functions. For example, renaming a file: NAME "foo" AS "bar".

The idea that it is an artifact of AI Lab culture makes sense. One of the analogies I saw is to Emacs Lisp as a language for editing text (and Stallman is listed in the acknowledgements of Steele's book).

In a compiled system like SBCL, things work much as they do in C, except the process is more "programmable."

To recap: when you load a (dynamically-linked) C program in Linux or OS X, the OS will create a new process, and map the dynamic linker (ld-linux.so or dyld) into that process's memory. It then transfers control to the dynamic linker. The dynamic linker then loads the rest of the program into memory (from ".so" or ".dylib" or ".dll" files) and links everything together (i.e. resolves symbol addresses) before passing control to the application entry point.

A compiled Lisp system like SBCL works much the same way, except it uses its own linking/compilation machinery. A new process is bootstrapped by running a C program, which loads the Lisp image into memory. The image is compiled code/data containing the Lisp runtime, compiler, and standard library. After the Lisp image is loaded, you've got two things to work with: COMPILE-FILE and LOAD. COMPILE-FILE takes Lisp source code and compiles it into a FASL (compiled code and data, equivalent to a ".o" file). LOAD functions much like dyld or ld-linux, and loads compiled code and data into the running process. Indeed, in some implementations like ECL, LOAD is just built on dlopen(). The "eventual program" is ultimately built by LOAD-ing the FASL files comprising the program.
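The manual version of that workflow looks roughly like this (the FASL file extension varies by implementation; "utils.lisp" is a made-up file name):

```lisp
(compile-file "utils.lisp")  ; produce a FASL file next to the source file
(load "utils.fasl")          ; link the compiled code into the running image
```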

In practice, you don't do this manually. Instead, you use something like ASDF: https://common-lisp.net/project/asdf/asdf.html#Defining-syst.... ASDF functions much like 'make' in that you've got a system definition listing the files belonging to your program. Whenever you call ASDF from the REPL, it'll look at what source files need to be (re)-built, compile them to FASLs, and (re)-load them into the running process. In contrast with C (but similarly to Python), Lisp allows code to be loaded into a running image. In the Lisp workflow, you don't restart the program you're writing each time you recompile (although you can, if you want). Instead, you keep it running, and load or re-load code into it as you work on it.
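A minimal system definition might look like this (system, file, and dependency names are invented for illustration; this would go in a my-app.asd file):

```lisp
(defsystem "my-app"
  :depends-on ("alexandria")
  :components ((:file "package")
               (:file "main" :depends-on ("package"))))

;; From the REPL, recompile and reload whatever changed:
;; (asdf:load-system "my-app")
```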

> I'm used to languages like Python, that have a number of files that are modules, and to start a program you run one of them as an entry point.

You can do that: start a Lisp with a file to load at startup; it can then load the dependencies.

> C programs consist of a lot of files that are compiled and linked into a binary executable.

You can do that, too. That's typical when you create a 'system' declaration which describes your software. ASDF would be a tool for that. You then compile the software, load it and dump an image. Some implementations have a more elaborate way to create applications or can create loadable libraries, which you can integrate into other applications.

> You build an in-memory state by adding things to it, later dump it to a binary. How do you get an overview of what there is?

Typically you would work with files. Write the code in files, evaluate the code from there and build the software from time to time as a whole. If you use SBCL, then you get tons of compile time information, type checks, efficiency hints, warnings, ....

You can build your Common Lisp programs from a set of files like in any other programming language. ASDF is the most common system for loading complex Lisp systems. I actually use a makefile to build my Common Lisp based executables from my set of files; no need to build up an in-memory state.

The difference with Common Lisp is that you are not limited to this build model: while your whole code is loaded, you can keep redefining functions during development. This creates a minimal cycle of editing and testing any single function in your program. That is a big asset in development, but in no way means you have to build your systems that way. On the contrary, I can only recommend restarting your Lisp image frequently, to ensure the system loads and builds from files rather than depending on image state.

As other people have said, you can use Lisp the same way you describe using Python or the same way you describe using C.

You also have a third option: you can use Lisp as a sort of interactive command-line calculator-plus-kitchen-sink. Leave it running and teach it to do odd jobs for you. If you accumulate your odd jobs into a file then you'll have them for later. You could of course save the state of the running Lisp, but a source file is better in that it keeps a nice tidy record of the source code of your hacks.

Serious programs written in Lisp are mostly organized pretty much like serious programs written in other languages: the program is factored into a collection of source files that, in the best case, reflects a logical decomposition of the functionality. There's some form of system loader that compiles and loads the sources in the right order and, if desired, dumps the result into an executable program.

Nowadays most people use ASDF for system loading and dumping, I think, but it's not hard to write your own system loader, and I still know people who prefer to do it that way. Just as an example, CCL still uses its own homegrown loader to build.

If your program is like a Python script then it's a Lisp source file that you pass to the Lisp kernel, in just the same way you would do it in Python.

If it's a compiled executable, then your sources were compiled into the Lisp and dumped to that executable, and it works pretty much like a C program. The main difference is that there is no distinguished main(); instead, Lisps generally treat the Lisp's own repl as the default main function, and the image-dumping tools offer you the option to substitute the main function of your choice.

Things are a little different, but only a little, and it's pretty easy to learn how they work.

Lisp offers a lot of ways to check what there is. You can inspect, search, look at source code, docs, assembly code. All of these things are dynamically inspectable. Tools like Emacs and SLIME make it easier.

You can still look at the files that get loaded. Lisp organizes itself around systems (libraries) and packages (namespaces). It's good to check what packages a system has, then you can check out the symbols provided by a package.

Lisp isn't totally wild-west in the concept of the Lisp image. There is organization to good code.

In Common Lisp, you write to source code files and then use ASDF/Quicklisp to compile/load that project. If you feel the need to create a standalone executable, you can dump the image with an entry function specified. It's essentially the same as Python, although standalone executables are less prominent than in-image programming.
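For example, dumping a standalone executable on SBCL (this call is SBCL-specific; other implementations have analogous facilities, and the names here are made up):

```lisp
(defun main ()
  (format t "Hello from a dumped image!~%"))

;; Saves the whole image as an executable whose entry point is MAIN,
;; then exits the running Lisp.
(sb-ext:save-lisp-and-die "hello" :executable t :toplevel #'main)
```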

I've been working on a CL project for a couple of years. It was my first big stab at using CL for something other than a toy. SBCL is a nice choice, but far from the only option; each implementation has its trade-offs. CL is not without its frustrations: documentation that has not aged well; a community that can be less than welcoming (in contrast to, say, the Racket community); inconsistencies, e.g. first, nth, elt, getf, aref... However, portability appears to be a strong point versus the Scheme scene. Single-binary compilation on SBCL/LW/ACL/CCL is great. I found SBCL's GC to be lacking at high garbage rates: it tended to promote garbage to a tenure that prevented it from being collected, and it would max out 32GB of memory even with explicit GCs invoked between files, whereas the other implementations would stay below 4GB.

Yeah, I need to examine a bit more of the situation with local-projects. I reject ~/common-lisp because I have my own directory structure, thanks. If I can symlink farm under ~/common-lisp, then what's the difference? :)

I do need to study the matter. I've found the ASDF docs to be unusually opaque (last time I read them), and my current solution to be Very Simple (and a little Stupid), so I've been content not changing the setup.

Indeed, I think nearly every section of the chapter on configuration requires you to know the contents of all the other sections for it to make sense. I have probably spent a few hours going over that one chapter at this point...

That said, CLHS isn't bad - it's lightweight, available off-line, and on-line you can pretty much always find what you're looking for by searching for "clhs [term of interest]".

> some libraries had a compatibility matrix... with common lisp implementations. that seemed weird to me.

Common Lisp has a pretty good standard, but it's a bit old, so it didn't predict some things we're currently using, and also left some other things to the implementers. So some features are only available via vendor-specific extensions, which makes it necessary for some projects - especially compatibility libraries - to be tested across a plethora of CL implementations.

The flip side is that you have a few commercial and open source implementations to choose from, which is valuable if your project can exploit strengths of a particular one.

> Why hasn't anyone made a more eye-friendly version of the Common Lisp HyperSpec? Having good, easily-browsable documentation is a core problem.

The HyperSpec is only distributed under a restrictive license. The TeX sources of the actual last draft of the ANSI specification are public domain, though. This means that someone would need to re-translate from the TeX sources rather than modify the HyperSpec.

> Why hasn't anyone made a more eye-friendly version of the Common Lisp HyperSpec

I mostly access the HyperSpec using Erik Naggum's hyperspec.el while programming in Emacs, configured to bring up Lynx. A lot of other people probably do something similar, so there has not been much interest in adding CSS.

Types in Common Lisp may or may not be represented by classes (all classes are types, not all types are classes). In this case, character is indeed a class, while base-char is not.

It's definitely true to say that base-char is a subtype of character, in that some characters are also base-chars, while others are not. (The spec defines that there are at least 96 standard characters, all of which are of type standard-char, and thus also of type base-char. Implementations may add others. Interestingly, the spec doesn't define how those characters are encoded; of course most modern implementations use some variant of Unicode.)
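At the REPL, the subtype relationship looks like this (all standard CL, so it should behave the same on any conforming implementation):

```lisp
(subtypep 'base-char 'character)  ; => T, T  (a definite "yes")
(typep #\a 'standard-char)        ; => T
(typep #\a 'base-char)            ; => T  (standard-chars are base-chars)
```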

It's a seamless dynamic programming language, which can, with care, be given excellent performance and a high level of abstraction. In my opinion, it's miles better than the other dynamic languages out there, by nearly every measure. It rewards investment and development very well; it's a tool for mastery, not for a quick and easy start.

If you're looking for statically typed languages, it's not going to win there. But my experience writing a lot of Perl and Python, along with some Clojure and Ruby, strongly indicates that Common Lisp is very, very competitive outside of the 'library' front.

The Common Lisp library ecosystem isn't as polished as Python's, from what I know. Nor is the documentation for many libraries up to scratch, even the stellar ones. Some libraries merely document the functions provided and don't go into how to use the library. Even stellar libraries such as Hunchentoot do this, which I find rather annoying after using Python libraries, which seem to make it an important point to tell you how to use things.

I love CL and the idea of Lisp in general, but its modern unpopularity makes it hard to ask questions (because chances are you won't get a response any time soon) and the library situation is a real downer.

Sometimes I get the feeling that the lack of CL libraries is somewhat self-imposed. "We don't need no stinking libraries. Since CL is so AWESOME, a competent CL programmer can reimplement whatever he needs in a fraction of the time that programmers using a lesser language would need even with the help of ready-made libraries". See "smug Lisp weenie".

Not saying that the above is actually factually true; certainly quicklisp has done a lot to make 3rd party libraries a lot more approachable.

I'm not an experienced web/http developer, but it [1] seems pretty understandable and even contains "Your own webserver (the easy teen-age New York version)" section for those who don't care, give them teh routez guys. define-easy-handler serves exactly that purpose.

From what I'm able to recall from my childhood, all docs were like that.

That feeling may be the result of some questionable syntactic design decisions in Clojure.

Hickey went over Lisp syntax and tried to remove parentheses wherever possible. (It was the hip thing; PG's Arc did it too, which may be where Hickey got it from.) As a result, you get a sub-par editing experience (generic sexp-based structure editing doesn't get you as far as it does with a Lisp) and diminished readability once you get above three omitted pairs of parentheses in a row.

On the other hand, Clojure arbitrarily mandates a second kind of list literal in some places in the syntax, again making Clojure harder to write and edit. I don't even see a readability benefit, but of course YMMV.

There are some other practical answers here so I'll take a different angle.

Fun! CL is a language to play in. After a day of wrangling Java & ObjC issues, I love settling down to an environment that lets me blast some code out and play with ideas. Of course this applies to other languages too, and it depends on your interests, so the case I want to put out there is:

Even if a language isn't suitable for your current business needs, see if it gives you joy. Languages have trade-offs to meet their goals; evaluate languages for pleasure too.

Also come visit #lispgames on freenode sometime... most of us are procrastinating on making engines, but it's always nice to have new folks around.

Better exception handling (conditions and restarts), advanced OOP capabilities, proper macros which are convenient to use (pretty much requires a language to be homoiconic), image-based development with ability to hot-swap any code in a running program, including but not limited to full class redefinitions without losing data. All that with resulting code able to achieve close-to-C++ performance thanks to very good (commercial and open-source) compilers. Not to mention the stability of the language thanks to the ANSI standard.

You might find it pleasant. Who knows? Maybe you're a professional and you'd want to understand CL so that you can embrace and extend the system you've inherited.

> Examples?

Conditions and restarts. Incremental compilation. CLOS. Numerics.

> 4. Sure, that's true for every language and tool. Choose it when it helps you, don't when it doesn't. But why would CL help me more than another language?

CL is more like a system than a compile-and-run language. The compiler, debugger, standard library, and platform libraries all live in the image. You build the program as you go, incrementally, and can inspect it while it is running.

An error doesn't halt the entire process... instead, condition handlers can choose a restart (including asking the user what to do) and execution continues. That's where you can attach to a remote image running on a server, notice that there's an error with a particular request, inspect the entire stack, fix the problem, and continue the request. No need to crash or anything like that. If a more hands-off approach is necessary, a handler can be written to choose an appropriate restart.
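A minimal sketch of that handler/restart split. The condition name BAD-ENTRY and the helper functions are made up for illustration:

```lisp
;; Sketch only: BAD-ENTRY, PARSE-ENTRY, and PARSE-ALL are invented names.
(define-condition bad-entry (error)
  ((entry :initarg :entry :reader bad-entry-entry)))

(defun parse-entry (entry)
  "Return ENTRY if it is a number; otherwise signal BAD-ENTRY,
offering the caller two ways to recover."
  (if (numberp entry)
      entry
      (restart-case (error 'bad-entry :entry entry)
        (use-value (v) v)        ; recover with a replacement value
        (skip-entry () nil))))   ; or drop the entry entirely

(defun parse-all (entries)
  "A hands-off policy: on BAD-ENTRY, pick the USE-VALUE restart with 0."
  (handler-bind ((bad-entry (lambda (c)
                              (declare (ignore c))
                              (invoke-restart 'use-value 0))))
    (remove nil (mapcar #'parse-entry entries))))
```

Here (parse-all '(1 "x" 3)) returns (1 0 3): the handler picks the restart without unwinding the parse, which is exactly what lets you also substitute an interactive choice when attached to a live image.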

CLOS is great. You can change the class definition in a running image and the instances will incrementally update without recompiling and restarting the program (and rebuilding all of that state).
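A small sketch of what that looks like in practice; POINT is an invented class, and this behavior is standard CLOS (instances are updated lazily when next touched):

```lisp
;; Illustration only; POINT is a made-up class.
(defclass point ()
  ((x :initarg :x :accessor x)))

(defparameter *p* (make-instance 'point :x 1))

;; Later, in the running image, the class grows a slot:
(defclass point ()
  ((x :initarg :x :accessor x)
   (y :initform 0 :accessor y)))

;; *P* predates the redefinition, but it is updated in place
;; (the new slot gets its :initform) the next time it is accessed:
(assert (= (x *p*) 1))
(assert (= (y *p*) 0))
```

For anything fancier than a default value, you can specialize update-instance-for-redefined-class to migrate old state into the new layout.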

You can develop the language you need with these tools instead of using the language you've chosen (or inherited). Need an interactive prover? Try ACL2. Experimenting with sequent calculus? Want a language built around recursive fractals?

This is a question that has been answered many times over the years. Search for "why common lisp" and you'll get a long list of them.

The answers now, IMO, are not substantially different than they were, say, ten years ago. The big differences, I think, are the new competitors in the LISP world: the rise of Clojure, and the rebranding and further development of PLT Scheme as Racket.

If you're so inclined I'd make it a "living document" that gets updated as the state-of-the-art evolves. Writing CL in 2017 is not likely to change rapidly in the next decade but even compared to what writing CL was like 8 years ago it has changed enough.

The question was how do they compare. 90% of your list applies to CL as well. (CL has Sheeple as a prototype-based object system.)

The statement about macros is more nuanced than you let on. Language decisions like being a Lisp-1 without a symbol namespace (CL packages) make hygiene a necessity, which in turn complicates writing macros. One thing I do like about Racket is that it comes with support for writing pattern-directed macros; that is possible to write in CL, but CL has no built-in tools for it. Another point for Racket is its support for macros with better error reporting (e.g. the ellipsis).
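For contrast, the usual CL idiom is manual hygiene via GENSYM; a toy two-clause OR (MY-OR is an invented name) shows the pattern:

```lisp
;; Manual hygiene in CL: GENSYM makes a fresh uninterned symbol so the
;; expansion cannot capture the caller's bindings. MY-OR is a toy example.
(defmacro my-or (a b)
  (let ((tmp (gensym "TMP")))
    `(let ((,tmp ,a))
       (if ,tmp ,tmp ,b))))
```

(my-or nil 5) evaluates to 5, and (let ((tmp 1)) (my-or nil tmp)) still returns 1, because the expansion binds a fresh gensym rather than the literal symbol TMP.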

15 is really the main selling point for Racket. It is a language-creating toolbox (I forget the term they use for it). Its module system stands out in that regard compared to CL, which has read tables as its way to customize the reader. Racket's solution is more general and extensible.

16 is not true when compared to CL. Racket's IDE is way behind SLIME/Sly; however, it doesn't require setup.

The other big difference is that CL is image-based and suited for interactive development, while Racket is batch-oriented, with 'Python-style' interactivity. If you come from Python you will think Racket is interactive, but if you come from Smalltalk you'll know what you are missing.

This may be true for an experienced user, but I've mainly used VS Code/Atom/Sublime etc, and I did not enjoy DrRacket -- I'm assuming that's what you're talking about. Some of the dropdown menus didn't render for me, and while this may sound stupid, I had the damnedest time figuring out that half my screen was a REPL and the other half a file.

How long ago did you try it / what platform are you running on? It was a few years ago now, but they rewrote the GUI libs to be native rather than whatever x-platform lib they used. DrRacket seemed to get... better then.

I mostly use emacs, but was playing with paredit for DrRacket with emacs keybindings the other day and decided that I could get to like some of the other creature comforts of DrRacket.

Note that a lot of what Faré likes about racket is somewhat intrinsic in there being one implementation, and a big part of why racket branded itself away from scheme (it was formerly PLT-Scheme).

In lisp there are still a lot of very different implementations in use, so if you want to "grow down" you either have to be non-portable or do a lot more work.

I love Common Lisp, so I will say "learn Common Lisp" but you'll probably be just as happy if you flip a coin (and I would recommend doing so rather than debating much as learning the "wrong" one now is probably better than learning the "right" one in the future).

My personal, subjective feeling is that Racket offers an easier entry path (docs, libraries and library discovery, relative non-cruftiness, package management via raco, IDE, etc.). I also like that Racket is a little more biased toward FP than CL. I'm told the macro system is ahead of CL's, though I can't say I deeply grok macros.

The Racket community always feels friendly and welcoming to beginners, which is something the CL community hasn't always been.

I do like that CL has a standard with multiple implementations. That said, the standard feels old, and you can quickly run into libraries that were built with less-than-universal compatibility. Things like tail-call optimization are commonly provided by CL implementations, but are not in the spec.

I've tried to carve out a bit of hobby time for lisps over the past few years. I started with CL and fought with the tooling and I could see the power, but I never felt great about my abilities with it. I tried Clojure and periodically use it as a stand-in for Java, and in that sense it is good. But these days, I've been playing in Racket, and it feels like the lisp I wish I'd started with.

One other item: the "enlightenment factor"—I'm really far from enlightenment, but I can see the crazy stuff people are doing in Racket with related, compatible languages, like Typed Racket and have a deep suspicion that Racket might carry me further on the path of enlightenment. That's down-the-road stuff for me though. Right now I'm enjoying small-time Racketeering.

For people using macs, it's probably worthwhile to mention CCL's IDE, which you can easily build from within the CCL sources using (require :cocoa-application), or which you can get for free from the Mac App Store (it's called "Clozure CL").

It's a little bit bare bones and a little bit perpetually unfinished, but it works, it gives you a Lisp-aware environment for editing and running Lisp code, and it even has a few handy tools.

Ah yes this is exactly what I needed. I was recently trying to start a CL project but I had trouble wading through all the outdated material, especially with regards to including external packages. Thanks for putting this together!

I'm going through a similar process. My caution is that 'package' has a very technical meaning in Common Lisp that is at odds with how 'package' is used in other languages (and a bit at odds with how the author uses it in their tutorial).

A package in Common Lisp is a set of interned symbols. In Common Lisp, systems are more in keeping with an ordinary understanding of packages...but combined with the idea of a build system just for fun.

ASDF is a way for managing systems (but it is worth keeping in mind that Common Lisp does not have any 'official' understanding of systems). ASDF is pretty much a de facto standard by consensus.

Quicklisp is a 'package manager' in the sense that it will go out and fetch a dependency from a repository. But what it fetches is a system: it is usually not a package in Common Lisp's technical sense.

From the Quicklisp FAQ:

How is Quicklisp related to ASDF?

Quicklisp has an archive of project files and metadata about project relationships. It can download a project and its dependencies. ASDF is used to actually compile and load the project and its dependencies.

ASDF is a little like make and Quicklisp is a little like a Linux package manager.
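To make the package/system distinction concrete, here is a minimal sketch; the name :DEMO-APP is made up. The system half would live in a separate demo-app.asd file declared with asdf:defsystem, listing files and dependencies — that file describes what Quicklisp actually fetches and ASDF builds, while the package below is purely a namespace of symbols:

```lisp
;; :DEMO-APP is a made-up name. A package is a namespace of interned
;; symbols; it says nothing about files, dependencies, or fetching code.
(defpackage :demo-app
  (:use :cl)
  (:export :greet))

(in-package :demo-app)

(defun greet (name)
  (format nil "Hello, ~a!" name))

(in-package :cl-user)

;; Exported symbols are reached through the package prefix:
(assert (string= (demo-app:greet "world") "Hello, world!"))
```

One system often defines several packages, and nothing stops a package from being spread across systems, which is why conflating the two terms causes confusion.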

On the other hand, Common Lisp is very stable around ASDF, SLIME, and Quicklisp. ASDF was started in 2002, SLIME in 2003, and Quicklisp in 2010.

0. It's a hard problem. I have a deep respect for your trying to tackle it.

1. The hard part is that it all matters on day one and everyone will get a lot of stuff wrong for a long while and some stuff wrong always when attempting non-trivial projects. This is the rule whether it's Python or Common Lisp or Racket.

2. I'm a fan of polyglotting languages in general and Lisps in particular. For a person really just starting out, I'd point them at the Racket ecosystem, because it is designed to be newbie-friendly with student languages, while Common Lisp is designed for production programming on hard problems.

3. For people with some programming experience and simple curiosity, I'd just advise them to install SBCL and play around in the REPL. With an exercise of adding a script to wrap it in readline. And some exercises using the text editor of their choice to load and read and write files and such.

4. I don't think there's a way to add training wheels to SLIME and Quicklisp and ASDF as the development environment. It just won't ever be DrRacket. At best it produces something like Aphyr's Clojure from the Ground Up...which uses Emacs; is more like a book; and is definitely a labor of love. It also dives into the details on day one.

The niche of "welcome to Lisp, here's how to code" has been filled super well over the years, and regularly people write intros to SBCL, CCL, etc. articulate-lisp isn't intended to do that. There are a few notes in that regard, just to whet your thoughts (and because I was bored), but that's not really what it's about.

But what is typically lacking is how you go from just a Lisp environment to a development environment that lets you operate at a professional level. Articulate-lisp is intended to deliver the thumbnail of how to get that put together, along with assorted references to further study.

At the time I created it, there was nothing really suitable for pro development out there. I guess roswell and portacle are things now.

Training wheels aren't really my bag of things. I'm notorious for preferring to read the O.G. paper on subjects rather than work through tutorials and simplified whatsits. But giving all the data at once doesn't provide the map of the territory that newbies crave.

There's no Royal Road to Lisp... or geometry. But a map to get you to where you're going does exist for geometry, and Lisp, I think deserves one too.

I think a resource for "leveling up" is probably better if it is opinionated. I mean, there are reasons a person might not want to use Emacs for Common Lisp development (aside from using a product with a built in IDE), but there's no reason to handle edge cases which have little to do with "leveling up" on Common Lisp...there may be a SLIME mode for Atom, but someone who chooses it is swimming upstream in terms of Common Lisp.

The situation is similar in regard to Lisp installations. There are good reasons not to use SBCL, but they probably don't have that much to do with "leveling up" (again outside the commercial IDE world) and trying to cater to those non-leveling up reasons is a distraction.

To put it another way, a person who is just starting out is not in a position to make decisions based on experience. A year later, they may have the experience to make informed decisions because they have learned what matters and what does not.

I'm hopeful for Roswell and Portacle, but not terribly optimistic, in ways similar to when I hear about a new Linux distro. The hard work is not the exciting honeymoon period. It's grinding out maintenance over the years without getting paid. It's designing good features for other people without getting paid. Most projects cannot do it.

Part of the problem is that leveling up on Common Lisp is mostly a matter of will to RTFM. Sure a site can have a great article explaining Common Lisp packages, but to understand packages, readers will need to understand symbols and so the options are:

1. Expect the reader to already understand symbols.

2. Describe all of Common Lisp.

3. Accept that the reader will still have a lot of work to do after reading the article.

1 and 3 collapse into similar requirements for an author. 2 works if the author is writing a book and really knows their stuff.

Wondering if anyone has any experience using Lisp for machine learning? I'm aware of mgl[0], but it seems to be abandoned. The lack of any wrappers for TensorFlow or Caffe is also a bit surprising to me. The cliki page [1] is also unhelpful and out of date. Is machine learning on Lisp dead, or are there projects out there that I'm just not aware of?

Not directly, but I have been following various projects over the years...

My take on this is that the people using CL for machine learning have been doing it for some time, and so have their own toolsets. TensorFlow is relatively new in that regard, and with respect to Lisp interfacing it would entail low-level binary interfacing (and therefore mostly non-portable between implementations) to hook CL code up to TensorFlow kernels (definitely not an expert here on either, however).

Also, what is popularly referred to as 'machine learning' is in my opinion mostly one aspect of the field, e.g. classification neural networks. While Lisp can definitely do this, Lisp AI programming (in my amateur opinion) shines more in the realm of machine reasoning/inference, due to the symbolic/dynamic nature of the environment: e.g. constructing a set of reasoning primitives (functions, facts, and decision trees) and a meta-interpreter to reason/infer about external data and walk around a problem space. Also, owing to the dynamic and rapid-development nature of the language, many people are likely working with their own prototype/core frameworks, possibly cobbled together from various small bits and pieces of 3rd-party code. Neural networks have been around for quite a while; what these new frameworks bring to the table is not so much new core algorithms as the ability to quickly cobble them together in a more popular/user-friendly way, and to take advantage of fast hardware (e.g. GPUs).

As for projects: in the general sense, Lisp has been a latecomer to the 'languages with a CPAN-style trove of public addon modules' crowd, owing in my opinion to the need to support multiple implementations for such a project to take hold. So older, but still quite functional, libraries might be found in various hodge-podge repositories which are not standardised but which old-timers have already included in their own local systems, etc. (see also the CMU AI repository).

In the last few years, much has been done in the module space. I would definitely consider Quicklisp to be roughly the de facto definitive list of current modules, especially those under active development, since the active community is basically converging on it as a module/distribution platform. Many (most?) active community projects are available as Quicklisp modules, so it is probably one of the first places to check for available libraries on any topic.

Also, the best way to 'explore' Quicklisp is to install it, then install various packages and muck around with the source code they download into your environment. The documentation tends to be much less 'external' (e.g. websites) and much more 'internal' (e.g. READMEs, in-tree code examples, or unit tests).

If emacs is an obstacle to Common Lisp in 2017, maybe what's needed is a Lisp-interaction plugin for vi(m) (or whatever it is that vim uses in lieu of emacs modes). I don't get the hype for modal editing but you can't argue with the data clearly showing emacs users are in the minority.

I've never used Emacs, so I can't really say how it compares to Slime, but I've thoroughly enjoyed using Vim-Slime[0] and Tmux[1] for development in lots of languages (CL, Racket, Clojure, Ruby, JS, SQL, Haskell, Bash, etc.).

I wrote a new CL plugin for Vim, which only depends on +channel and supports most SLIME features: https://github.com/l04m33/vlime
It's quite new, and I'd be grateful if you could try it out and provide some feedback.

Really! Do you use it? Would you feel comfortable submitting a PR explaining how to set it up? I HATE recommending emacs as the IDE for Common Lisp if it's not something people are comfortable with already.

I had (and still have) a similar aversion to emacs, so I used to use Sublime with SublimeREPL plugin, which allowed me to run Common Lisp and Clojure (or any other language with a REPL) inside the editor. Pretty much a makeshift IDE, with an interface you're already used to. I've since moved on to VSCode, and I bet there must already be a way to replicate this in it, I just haven't had the need to work with Lisp since to explore this.

I'm not sure I understand the question. If you need to check if a value is null, you compare it with :null. It does mean that values you are going to serialize out to JSON should use :null rather than the more lispy nil, but JSON interfaces where you are required to send null values are quite rare (in fact I don't recall ever encountering one), so it hasn't been a problem for me in practice.

So, I really dislike truthiness and falsiness in other languages, but somehow, in Lisp, it just seems to work. One thing that helps is that, in places where the distinction between null and false matters, other features of Lisp help keep them distinct. For example, getting a value from a hash table returns nil if the value is not found and, consequently, you can't distinguish a missing key from a stored nil. GETHASH solves this by returning multiple values: the first is the value stored (if there is one), while the second indicates whether or not the key was found in the hash table. Similarly, optional arguments [declared like (defun foo (&optional bar))] default to nil but, if you want to know whether a value was actually passed, you can change the argument declaration to account for this: (defun foo (&optional (bar nil bar-p))). In this case, when bar isn't passed, bar-p will be nil, but when bar is passed, it will be t.
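Both idioms from the paragraph above, as a runnable sketch (FOO is the same toy function; the hash keys are arbitrary):

```lisp
;; GETHASH's second value distinguishes a stored NIL from a missing key:
(let ((h (make-hash-table)))
  (setf (gethash :a h) nil)
  (multiple-value-bind (value present-p) (gethash :a h)
    (assert (and (null value) present-p)))        ; key present, value NIL
  (multiple-value-bind (value present-p) (gethash :b h)
    (assert (and (null value) (not present-p))))) ; key genuinely missing

;; A supplied-p parameter distinguishes "passed NIL" from "not passed":
(defun foo (&optional (bar nil bar-p))
  (list bar bar-p))

(assert (equal (foo)     '(nil nil)))  ; BAR not passed
(assert (equal (foo nil) '(nil t)))    ; BAR explicitly passed as NIL
```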

I wrote in Common Lisp the star map generation software at the core of my startup, http://greaterskies.com, and could not be happier. But now that it's getting off the ground I wonder whether it may adversely impact my chances of being acquired. Are there any known examples of recent CL-based startups?

Watch the YouTube videos from Baggers. It's a lot more complicated than your average Windows user will want to go through. Then you have to set up Emacs, Quicklisp, etc. I never really knew what Quicklisp was doing, and it made me nervous (I trust VS NuGet).

Everything should Just Work (TM) on Windows these days. SBCL provides reasonably fresh binaries to download. Quicklisp can be installed by entering the necessary commands into SBCL console. Emacs has a Windows installer. SLIME can be installed from Quicklisp. And so on. I develop in Lisp on both Windows and Linux machines, it works exactly the same (which cannot be said of some other languages).

Portacle[1] and lispstick[2] both provide portable sbcl-based development environments on windows.

Quicklisp is really simple to track what it's doing, and trivial to manage multiple installs of quicklisp simultaneously (though you can only use one install at a time in a given lisp image). Everything goes in <install-directory>/dists/<distribution-name>.

If very quick startup times are a necessity, then you just build a standalone image. This is very much like deploying a program in e.g. C. You build it with asdf, and then install it (and any dynamic libraries) into your deploy environment.

For "long running" programs, this may not be as necessary, and since the lisp runtime includes a full lisp compiler, it's not uncommon to just have a script that launches your lisp executable, uses ASDF to load the system, and calls the entry point.

For either one, it can also be useful to expose a swank server, though that does have security implications (anybody who can connect to the swank server (either localhost or unix domain sockets) can run arbitrary code).
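As a rough sketch of the standalone-image route on SBCL; the system name :my-app and the entry point my-app:main are hypothetical stand-ins for your own project:

```lisp
;; build.lisp -- a sketch; :MY-APP and MY-APP:MAIN are hypothetical.
;; Run as: sbcl --load build.lisp
(require :asdf)

;; Compile and load the system and its dependencies into this image.
(asdf:load-system :my-app)

;; Dump a self-contained executable; on disk it behaves like any other
;; program, with near-instant startup since nothing is reloaded.
(sb-ext:save-lisp-and-die "my-app"
                          :executable t
                          :toplevel #'my-app:main)
```

Other implementations have equivalents (e.g. CCL's save-application), though the exact options differ, which is one reason deploy scripts tend to be implementation-specific.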

Since you mentioned Light Table, I'm going to guess that you're open to other lisps. I think this is one thing that the Racket folks get right. My wife's first foray into programming was a coursera course starting with a simplified variant of Racket, and the dev environment could not have been less of an issue.

Lisp and Emacs have decades of shared history, it's unlikely someone will really break this bond.

It would be nice to have alternatives, of course, but there is a reason certain people like emacs. (To be fair, I was an emacs user before I came across Lisp, but I can imagine how annoying/intimidating/frustrating it can be to somebody who has never used emacs before.)

I do not know how to message "Scott", the author of the page, so I am putting this here.

In the "LispWorks CL" page, under "Implementations", the "Notes" section elucidates a mystery about the Personal Edition not recognizing the Lisp init files. This is actually a limitation of LispWorks Personal Edition which is described on the link provided to retrieve said edition.

Actually the Personal Edition is mostly for people to play around a bit or students doing some homework.

Other than that, the typical user will buy a version, which won't have the above limitations. Users may also test a time-limited full version before buying. People then use either LispWorks or Allegro CL.

In some school/university courses it would be like that. The students get a simple installer and have the IDE up and running after a few clicks, so they don't have to learn GNU Emacs or install Lisp infrastructure for some basic homework.

I am sure there are plenty of people who have a preferred Lisp. It does drive me nuts how, if there is a post on R, the top comments are people touting Python as the bigger player in statistics and data science (which it isn't), or a ton of other languages.

Came here to share my love for Racket. Parameterization, custodians, threading, channels, message inboxes, and the packaging tool is also great. I just recently published my first application to the Racket package repo, and I found it to be the least painful experience so far.

That right there is what I am talking about. I make a point and people have to attack it, saying that the language is pointless, when it is the number one language in that domain, which just happened in the last two years.

Why would you ever have to rewrite the code? Python isn't faster and has fewer functions than R, and if you want you can just drop a few lines of Python in a cell of a notebook. I like Python and it's a good choice, but R is a great language; in fact, Python's pandas library is trying to build an R equivalent.

R is useful mostly in pure analysis; interfacing to other systems and dealing with application/control logic is not its strong point. So unless you are embedding R into a larger project (e.g. an R batch scheduler), if you need those features, you'll likely need to rewrite.

>I make a point and people have to attack it, saying that the language is pointless, when it is the number one language in that domain, which just happened in the last two years.

That's not true, though. R doesn't have anywhere near the ecosystem that Python does for natural language processing, web frameworks, machine learning, computer algebra and symbolic reasoning, systems programming, image processing, document processing, and other things I don't know about; but if I needed something else, I could use Python confident that there would be good packages for it with a community around them.

The entirety of R's unique mindshare is that it has a million variations on linear regression and contingency table tests. And frankly they're all so simple to implement that if you can't be bothered to learn how to implement them in 2 lines of Python, then you probably don't know what your program is actually doing.

R is something that caught on because it made its statistics package top-level, saving keystrokes for statistics and biostatistics professors who never needed anything else and didn't know how to otherwise program. Its unique syntax has led their poor students to have to learn C-family syntax many years after they could have been working and being productive with it.

Now some companies are accommodating R for their entry-level data scientist positions in order to hire cheaper help that can't find better options, but their skills are limited by being disconnected from the rest of the programming world in both packages and syntax.

You're comparing a general-purpose language and a domain-specific language.

Domain-specific just in terms of statistics and data science. I don't need a web framework; I need to make my report and have the charts work. I also push out my reports in Word and PowerPoint, and I can't do that in Python, but I can in R with the ReportR library (the whole reason why I left Python and pandas and came to R).

I say if you want to do more than the domain-specific stuff, then sure, Python is an awesome language and you can use your Python skills for more; but if you want the extras that the domain-specific language gives you, come on board.

To say R is a "pointless language" is very troll-like, especially when companies have spent millions on investment and infrastructure coming to R in the last 24 months.

I could also say what I always say about Python: it is the world's greatest second-best language. It doesn't do anything best, but it is an awesome second best, and if that is fine with you, have fun. I love Python, but I also see the limits of the language. I thought 15 years ago that Python would revolutionize everything and everything would be written in it. We now have Lua, which took over game scripting. We have C++ still ruling the day for applications. Web development has been dominated by JavaScript, and mobile development... well. So the two largest platforms aren't really impacted by Python.

> Now some companies are accommodating R for their entry-level data scientist positions in order to hire cheaper help

It isn't a clear picture, but NO way is Python dominating in pay or work. R is a GREAT language and so is Python, but for some weird reason the Python community hates on R, while the developers of R and Python respect and work with each other and help both sides. I am shocked you feel Python is so overpoweringly awesome.

In addition to the other problems described, Clojure just breaks the value of Lisp's syntax. Common Lisp does have its irregularities but code littered with Java imports and square brackets might as well just use C formatting and be done with it.

The primary value, to me, of Lisp syntax is that code is represented as data-structure literals which can be manipulated as easily as any other data structure prior to execution. Clojure has that; it just comes with literals for a few more data structures and uses two of them, vectors and maps, in the syntax of built-in forms.
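That manipulability is easy to demonstrate in either dialect; in CL terms, a form is an ordinary list you can rewrite before evaluating (the numbers here are arbitrary):

```lisp
;; A form is an ordinary list: build it, evaluate it, rewrite it,
;; evaluate it again.
(let ((form (list '+ 2 3)))
  (assert (= (eval form) 5))
  (setf (first form) '*)      ; surgically swap the operator
  (assert (= (eval form) 6)))
```

Macros do the same kind of list surgery at compile time, which is why both sides of this thread agree the sexp representation itself is the valuable part.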

I think the motivation for making the sequence of arguments in function/macro definitions vectors instead of lists was primarily syntactic - to make them look different from the body. They're definitely vectors internally though.

If it were powerful enough to warrant use, someone would have hacked it onto tools.reader. It's obvious where it would go. For some reason, nobody has bothered with this, likely because the reader already has programmable data (via tagged literals) and anything beyond that results in maintaining your own special reader.

I haven't yet encountered somewhere that I'd have used reader macros for that wasn't better solved by using data literals or tagged literals. There might be some random place where I need syntax beyond Clojure's data literals, but I'm far more likely to use a combination of data literals + tags. If absolutely required (it hasn't been yet in 5 years of daily production usage), I could simulate custom grammar using clj-antlr and a macro. If, for some unknown reason, I needed a custom non-sexp grammar embedded in my lisp that I wanted to serialize exactly and then read back in the same syntax (probably because I hate myself and/or my team), I could hack it onto tools.reader.

tldr; you have reader macros if you want them. Nobody wants them who uses Clojure day to day. If they did want them, they could trivially extend tools.reader themselves.

Personally if I wanted reader macros in clojure I wouldn't bother implementing them because it wouldn't be worth the pain to me. However, if clojure had them I/libraries would likely make good use of them.

It has a reader as a library. It's entirely possible for you to change its lookup map to a defmethod and go to town altering s-expressions for arbitrary syntax if you wish. You're not hacking a new reader here; you'd be refactoring something that probably should have been a generic method to begin with into one. Nobody has bothered with this because it's not useful to anyone, as far as I can tell. People who want non-data syntax write it with clj-antlr/instaparse (which are very easy to use) and use a macro when they want to interleave it into their code. These universally end up with a data-based AST that is readable and printable.

If you know Common Lisp reasonably, you'll probably have some insight into Clojure, e.g. Leiningen is a system definition facility; seq is an extension of sequences; and multimethods are generics much like methods. I'd say Clojure stands on the shoulders of giants.

There are some Lispy things that Clojure does out of the box that are more Lispy than what Common Lisp does out of the box, e.g. lists and other seq's as functions in the function position of an x-expression. There are things Clojure will not do, such as reader macros. Clojure's syntax is a bit different, but generally reads cleaner in the same way one might say a font reads cleaner. YMMV.

> lists and other seq's as functions in the function position of an x-expression.

Makes code more difficult to read. I consider this a language design error. I had that on the Lisp Machine 30 years ago (e.g. callable arrays; maybe even Maclisp in the 70s had it); few people used it and it did not make it into Common Lisp.

Common Lisp was designed such that for the programmer and for the compiler the first element of a Lisp form is easily recognisable as a function: it has to be a symbol naming a function or an actual lambda expression.

Pays back in code maintenance over time...

> but generally reads cleaner in the same way one might say a font reads cleaner.

Personally, I find code easier or harder to read in the same ways I find prose easier or harder to read -- it depends on the author's ability to tell a story and the story the author is trying to tell and my interest in hearing the telling.

Seq's as functions are sometimes more readable to me for the same reasons code that uses reader macros may be more readable to me (even though reader macros may do away with normal s-expression syntax entirely). But again, mileage varies.

But this is true for anything that's in the callee position, right? Either it can be determined (possibly via static flow analysis) that it's a particular function, in which case it is inlined; or else it is an unknown function, in which case it won't be. It would seem that statically verifying that something is a vector reference isn't really harder than doing the same for a function, all else being equal.

Depends. Since the developer can use low-level functions and provide type declarations, a compiler can generate usefully fast code without too much work. More advanced compilers need fewer declarations, since they do some amount of type inference. One of the costs: the compiler itself then usually is quite a bit slower.

I wish an experienced LISPer would explain why should one use Common Lisp over a language like Golang. Golang now has https://github.com/glycerine/zygomys for scripting. For that matter, why would one choose Common Lisp over GNU Guile? (Guile now supports fibers.) What does Common Lisp offer the working programmer that is an advantage over other languages?

"I wish an experienced LISPer would explain why should one use Common Lisp over a language like Golang."

Those are two very different languages. Very few people would ever find themselves staring at a choice of what language to use, having narrowed it down to just those two.

Lisp is notorious for its mind-expanding freedom. I think perhaps this attribute of it is less unique than it used to be, but it is still present. While I have a hard time recommending it for production usage, you can still learn a lot about programming from using it for a while. It is still one of the most programmer-empowering languages there is.

(Indeed, I think the vast bulk of the reason why it's not really all that great of a language and why it has never taken off is that it is too programmer empowering; it grants power the vast, vast bulk of programmers are not actually capable of handling well at scale. It makes it so that two programmers separated by some communication gap rapidly find it challenging to write code that works together. That said, deliberately spending some time in such an environment is an important step for the budding systems developer, I think.)

> It makes it so that two programmers separated by some communication gap rapidly find it challenging to write code that works together.

The problem is not within a team, distributed or not. That will work, even for larger teams with some guidance. It's more the application and libraries which the team produces over time. 'Syncing' them with the outside can be hard. But you'll experience that in Java applications which use large frameworks, too. Same in Javascript.

This made it important to use a common Lisp standard and was the reason DARPA called for the creation of Common Lisp. Every DoD contractor brought their own Lisp dialect with their application, which made maintenance and sharing difficult and costly.

The standard could have covered more (concurrency, graphics, ...), but much of that was rapidly evolving technology, and it made little sense at the time to create a real standard for, say, graphics. Later, Lisp didn't have enough commercial backing for these things, and people were moving on to simpler, more conventional languages like Java, which had lots of industry support.

It's an expressive language with a powerful development environment in the form of the REPL plus Slime. I love using Lisp for exploratory programming, as it's so simple to write something that both works now and makes sense later.

It has a wider range of built-in programming paradigms more fully realised than Go or guile. In comparison to CLOS, all other object systems are just kidding. Or you can program functionally or procedurally if you like. This gives you a better toolbox.

It's a mature and stable environment. I can run code from 15 years ago or I can update it to use the latest libraries at my option.

And I just find the regularity of the parentheses has much less of a screeching brakes effect on the experience of the code. It's smoother to read and write.

So I find Common Lisp a productive pleasure to use. If I were working on a project other people had to read and maintain, I'd use Go, though, as it's easier for others to apply skills from more common languages and get up to speed more quickly.

Most of the traditional definitions of "memory safety" are not violated by having null pointers, as long as they just crash when used incorrectly. nils in Go don't let you start accessing things you shouldn't or anything. Golang is memory safe by most definitions. (Possibly not by a definition that includes concurrent memory safety. I expect in 20 or 30 years the term "memory safe" will indeed involve that. But at the moment, it doesn't, because almost no language has that right now.)

This doesn't invalidate your underlying point that you want to use a language without implicit nulls... it's just the wrong terminology.

I'd put that one under type safety, and it definitely seems bizarre to me to make a new language that claims to have strong, static typing and allow arbitrary types, or pointers to arbitrary types, to be null.

Go's nil is statically-typed. The nil keyword is polymorphic (as numbers in the source code are) but individual nils that occur at runtime in Go are actually typed. You can actually put methods on them and they execute just fine:

While I'm actually still in agreement that I don't love a language with nil pointers, it is less bad in Go because they are less pointy and stabby. It's not like C where they are automatically a crashing offense; they are valid values treated in sensible ways in a lot more places than in C, because in Go they still have their type associated with them. But I'd still like to be able to declare a non-nillable pointer type.

The type of s/p there can't be guaranteed to be (S not nil) at compile time, which is a bizarre decision in an otherwise statically-typed language whose type system is designed to provide runtime type safety. You still have to do a nil check at runtime if you want to be safe here.

While it's nicer that you can perform the check inside Test() instead of before the call, I don't understand why the type system doesn't just prohibit this and make you use a union type if you want (S or nil), given that a lot more situations call for (S not nil) than (S or nil).