Posted by samzenpus on Monday May 17, 2010 @02:00PM
from the read-all-about-it dept.

eldavojohn writes "Programming Clojure by Stuart Halloway was very near to the perfect book for me. It covers many things common to many Lisp languages while highlighting in moderate detail the things that make Clojure unique and worthy of some attention. The book spends a large amount of time dealing with the intricacies of interfacing fluidly with Java (down to a package rewrite inside a large project). This fits me perfectly as a Java programmer, and I now feel ready to experiment with peppering functional language capabilities into an object oriented language. The book also strives to show how to simplify multithreading through functional programming, which is good because I find multithreading in Java a serious headache that few are good at. Programming Clojure, released in May 2009, is currently the only book out there devoted to Clojure, and the introduction is written by the language's creator, Rich Hickey, who says, 'What is so thrilling about Stuart's book is the extent to which he "gets" Clojure.' The book earns its place on the Pragmatic Bookshelf by guiding the user through rewriting a part of Ant into a new build tool called Lancet — adding to the project what you just learned about Clojure at the end of each chapter." Keep reading for the rest of eldavojohn's review.

Programming Clojure

author: Stuart Halloway
pages: 304
publisher: The Pragmatic Bookshelf
rating: 8/10
reviewer: eldavojohn
ISBN: 978-1-934356-33-3
summary: A firm definition of Clojure via examples, coupled with the beginnings of actually programming Clojure.

First, a lot of you are probably wondering what Clojure is and asking me why you should care at all about it. Well, Clojure is a functional programming (FP) language that runs on top of the extremely pervasive Java Virtual Machine and, in doing so, seems to offer a simpler way of multithreaded programming. It belongs to the Lisp family of languages, and as a result this book covers a lot of remedial material that is common to other Lisps. If you're a serious Lisp programmer, you'll be able to skip some of this book (the intro will guide you). Clojure has rarely been mentioned on Slashdot, with the resulting comments revealing mostly confusion or dismissing it as a buzzword. It's going to be hard to write this review about the book instead of the language, given that 99% of what I know about Clojure comes from this book. If you work through this book linearly, you should also use the command-line read-eval-print loop (REPL) that, similar to Ruby's IRB, lets you get hands-on with Clojure and Halloway's examples.

Both Hickey and Halloway are very active in Clojure development. In fact, Halloway has a video out on types and protocols, new developments in Clojure 1.2 since the book went to print. Halloway does a good job of providing examples, keeping the book pragmatic and showing you the "wrong" way before incrementally showing you how to correctly accomplish various goals in Clojure. But he loses two points in this review, for two reasons. One is that he over-evangelizes Clojure. It would lend a lot more credibility to everything else he says if he would relent a bit and abstain from painting Clojure as the best language for every task. This ties into my second point: books on programming languages are supposed to give the reader two very valuable things, knowledge of when to use the language and knowledge of when not to use it. Programming Clojure is lacking in the latter--this is not a unique problem, as most books about a language really sell their language. All too often in my professional career I see a solution and think, "Wow, that really was not the right tool for the job." (I'm looking at you, Java.) Clojure definitely has its strengths and weaknesses, despite very little evidence of the latter in this book, although I was directed to a QCon presentation where the author speaks more about where Clojure excels in real life.

That said, the book is a great fit for the object-oriented Java developer who does not also code in a Lisp-like language regularly. I say that because Chapter Two reviews all of the facets of Clojure--most of which are found in other Lisps and might seem remedial to a proficient Lisp developer. However, before you skip it entirely, note that Halloway injects important asides into this material, ranging from how not to do things in Clojure to minute differences and the implications they hold. Chapter Five dives into the fundamentals and features of functional programming in Clojure. This chapter was especially useful to me, as I'm not used to languages featuring things like lazy sequences, caching of results, or tail-call optimization. Working through the examples in Chapter Five really opened my eyes to some of the more powerful aspects of FP--like how an infinite sequence can easily be handled by Clojure, whose laziness lets you pay only for what you consume from that sequence. While definitions of infinite sequences are also possible in Haskell or Python, Clojure brings this capability to the JVM (not that anything is preventing a more verbose Java library from handling such structures).
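To make the laziness point concrete, here is a minimal sketch (my own, not from the book) of working with an infinite sequence at the REPL:

```clojure
;; An infinite sequence of even numbers. Defining it is safe because
;; lazy seqs are realized only as they are consumed.
(def evens (iterate #(+ % 2) 0))

;; 'take' forces only the elements you ask for.
(take 5 evens)                      ; => (0 2 4 6 8)

;; Composed lazy operations still pay only for what is consumed.
(take 3 (filter #(> % 10) evens))   ; => (12 14 16)
```

Asking for `(count evens)` would never return, which is the sense in which you pay only for what you need and nothing more.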

Chapter Three focuses on Clojure's interaction with Java and does a great job of showing you how to rewrite part of a Java project in Clojure and run it on the JVM. This includes calling Java from Clojure, creating and compiling Clojure into Java classes, and handling Java exceptions in Clojure, ending with the beginning work on Lancet (the build tool the book builds up using what we learn in each chapter). It also contains a bit on optimizing performance when working with Java from Clojure. This theme continues through the book, as Halloway knows that one of Clojure's main selling points is that it can be faster than Java if you're willing to put in the extra work and planning to utilize pure functional programming.
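For readers who haven't seen it, Java interop in Clojure looks roughly like this (a small sketch of my own, not the book's Lancet code):

```clojure
;; Instance methods, static methods, and constructors all use
;; ordinary Clojure forms -- no wrapper layer required.
(.toUpperCase "clojure")            ; => "CLOJURE"
(Math/pow 2 10)                     ; => 1024.0

(def sb (StringBuilder. "Hello"))   ; constructor: ClassName.
(.append sb ", world")
(str sb)                            ; => "Hello, world"

;; Chained calls flatten with the '..' macro:
(.. "  trim me  " trim length)      ; => 7
```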

In Java, everything is an object. In Scheme, everything is a list. Well, in Clojure, the main staple is the sequence, which brings us to Chapter Four: Unifying Data with Sequences. While this chapter succeeds in teaching how to load data into sequences, how to consume data from sequences, and how to force evaluation of lazy sequences, it felt like one of the weakest chapters in the book. This material is all necessary in learning Clojure, but Halloway skimps here and could stand to add more examples of what is and isn't seq-able, of seq-ing on various sources, and of applying sequence functions across them.
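In lieu of the examples I wished the chapter had, here's a quick sketch (mine, not Halloway's) of how the seq abstraction unifies different data sources:

```clojure
;; first/rest/seq work uniformly on lists, vectors, strings, and maps.
(first [1 2 3])       ; => 1
(rest '(1 2 3))       ; => (2 3)
(seq "abc")           ; => (\a \b \c)
(seq {:a 1 :b 2})     ; => ([:a 1] [:b 2])

;; doall forces a lazy seq when you need its side effects now.
(doall (map println [1 2 3]))
```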

Multicore chips are all the rage these days, and right now it seems that developers are by and large content with coding single-threaded applications. But that may change as users come to expect applications to make use of more than a few cores. In the introduction, Halloway argues a few reasons why we all should use Clojure, and one of those reasons happens to be the somewhat sound logic that we will all have cores coming out of our ears in the near future. That means that as a developer you have the option to spawn more threads, which means coordination of threads, which means you will be forced to do the dirty dance of concurrency. Chapter Six is entirely devoted to this and, honestly, I reread a lot of this chapter, as there are several update mechanisms and models that you can use to manage concurrency in Clojure. Unsurprisingly, there is no silver bullet for concurrency, even in Clojure. This book has but a handful of figures, and their formatting leaves much to be desired, but the two in this chapter are necessary references for deciding whether you should use refs and software transactional memory, atoms, agents, vars, or classic Java locks. This is a potent chapter that ends with a snake game implementation in Clojure demonstrating some basic concurrency. While Clojure protects you from some classically complex issues and may make concurrency vastly more succinct, it still requires a lot of thought and planning. Halloway provides good direction, but clearly hands-on experience is a necessity in this realm.
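As a rough illustration of two of those update mechanisms (a sketch of my own, not the book's snake game), atoms handle independent state while refs coordinate several identities inside one STM transaction:

```clojure
;; An atom: uncoordinated, synchronous updates. swap! retries on
;; contention, so the update function must be free of side effects.
(def counter (atom 0))
(swap! counter inc)
@counter                 ; => 1

;; Refs + STM: either both alterations commit, or neither does.
(def checking (ref 100))
(def savings  (ref 0))
(dosync
  (alter checking - 25)
  (alter savings  + 25))
[@checking @savings]     ; => [75 25]
```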

Chapter Seven focuses entirely on macros and is somewhat disheartening in that it presents an extremely powerful feature of Clojure that is also very complex. Halloway gives two rules and an exception for Macro Club. The first rule is: "Don't Write Macros." The second rule is: "Write Macros if That Is the Only Way to Encapsulate a Pattern." The exception is that you can also write macros if doing so makes calling your code easier. Halloway does a good job of explaining the basics of macros in Clojure and breaks them down via a taxonomy into categories, with examples of each. Macros are a necessity when you're trying to augment Clojure by adding features to it or when you are creating a Domain-Specific Language (DSL). Macros in Clojure do seem easier than macros in most other Lisps. At the end of Chapter Seven, you create a basic DSL for Lancet, which was helpful even though I was left feeling helpless in the face of macros. After the complexity of macros in Chapter Seven, Chapter Eight's multimethods are similar to Java polymorphism and were much easier to wrap my head around. Multimethods are used very infrequently (seven times in the five thousand lines that compose the Clojure core).
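To give a flavor of both features, here is a minimal sketch of my own (`unless` is a classic teaching macro; the `area` multimethod is a made-up example):

```clojure
;; 'unless' must be a macro: a function would evaluate both
;; branches eagerly before the call.
(defmacro unless [test then else]
  `(if (not ~test) ~then ~else))

(unless false :yes :no)            ; => :yes

;; A multimethod dispatches on any function of its arguments --
;; here, a keyword lookup -- giving open polymorphism without classes.
(defmulti area :shape)
(defmethod area :circle [{:keys [r]}]   (* Math/PI r r))
(defmethod area :rect   [{:keys [w h]}] (* w h))

(area {:shape :rect :w 3 :h 4})    ; => 12
```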

Chapter Nine is unfortunately less than twenty pages and deals with "Clojure in the Wild." You would think that a book in the Pragmatic Programmer series would have more pragmatism in it than the features of a language plus Lancet, but let's face it--Clojure is a relatively young language. Chapter Nine covers automated tests, data access, and web development. The automated-testing section is a short look at Clojure's test-is package. The database material appears to be little more than wrappers around the already mature JDBC. The web development consists of an intro to Compojure, which is similar to web.py and Sinatra. Compojure shows a lot of promise in reducing the amount of code one needs to write a basic web application. It lacks the feature set and support that Rails has for rapidly building CRUD applications, but it holds a lot of potential to be fleshed out into something similarly powerful. Halloway says his introductions to these projects should "whet your appetite for the exciting world of Clojure development," but I think a more accurate description is that these brief brushes with functional projects leave the reader ravenously hungry for more.

Some final thoughts on the book: I caught only two very minor typos. It's all English and code; there were no pictures or illustrations except for one on page 96, in which a tiny drawing of a character named Joe asks a question about vectors. Oddly enough, I didn't find Joe on any of the other three hundred pages. It was very easy to work through this book from cover to cover, and the example code was very instrumental in my understanding of Clojure. As a Java monkey, I found rereading sections a requirement, although the book is concise enough for me to have enjoyed in my free time over one week. Halloway cites mostly websites, using tinyurl links to blogs like Steve Yegge's, and frequently references Wikipedia. Only three of his many citations are other printed books (although one of them is Gödel, Escher, Bach: An Eternal Golden Braid). Halloway's greatest strength is the engaging examples he picks for the reader (like the Hofstadter Sequence), and I hope that future editions of the book build on this as well as expand on the growing base of Clojure projects out there. His github is rife with both instructive and pragmatic examples that could stand to be included in a future book.

Some final thoughts on the language: Clojure holds a lot of potential that is yet to be realized. I cannot say yet whether the succinct syntax offers a good balance between quick coding and readability. To the uninitiated, the code can look like a jumble of symbols. Yes, we escape the verbosity of Java and the kingdom of nouns, but is what Clojure offers (a neighboring kingdom of verbs) better? While Clojure is concise, it relies on a lot of core functions and keywords, which meant a lot of looking things up when I was starting out. Clojure code is potent and powerful. A mere five thousand lines of Clojure code create your engine--the core of the language. I assume this brevity is due to the ingenious reuse that Clojure offers, but I would hate to be the person maintaining that code if I were not its author. What's better is that this code is quickly conjured at the REPL if you wish to read it yourself or augment a feature. A sage coworker who has seen much more than I have in this business of software development recommended Clojure to me. He was right that it is a very interesting and innovative language, but in my opinion it has a long way to go before it becomes the next Ruby or Java. Clojure needs an equivalent to Ruby on Rails, and it's fighting an uphill battle against all the developers like myself who left college with so much object-oriented coding and so little functional programming (although Scheme is my alma mater's weed-out course). If you find yourself stagnating and are thirsty for some continuing education in the form of a stimulating challenge, I recommend Clojure (and this book on Clojure). Hopefully Clojure's full potential will be realized by the community, and it will find its deserved place in many developers' tool sets as the right tool for some jobs.

Yes, I've heard of Clojure. I have even read this book and enjoyed it. But I am a bit of a dilettante when it comes to programming languages. I've read a good number of the books from Pragmatic and like them. I especially like that they are PDF and DRM-free. Of course, each book is unique in that it is watermarked on each page with my name.
I've not used the language myself for anything other than "playing around." I also like Groovy and Scala, which are also JVM-based languages, which makes them easy to integrate.

Everybody who stays up to date with new languages and platforms. It's a modern Lisp designed for the JVM. As such, it's part of the wave of new JVM languages (which includes Groovy, JRuby, Scala, Jython, etc.).

I've been looking into Clojure, and the parentheses vs. curly braces thing is one that stuck out to me. Braces define blocks, and only blocks of code, in most OO programming languages. Clojure's parentheses enclose lists of data, defn functions (ignore the redundancy in that), and practically everything besides vectors. So it's sometimes hard to tell where one logical block stands apart from another. Not to mention, the "recommended" indentation style feels counter-intuitive coming from an OO development background.

It's worth fighting through this initial impression to get to the other side. Lisp syntax is very regular, which makes reading/writing/tooling remarkably easy (once you are familiar). And almost any kind of expression can go anywhere, encouraging maximal reuse.
Mainstream OO languages, on the other hand, are built using all kinds of special syntax that cannot be composed together. If you and everyone around you have memorized the rules, they seem easy, but they are not simple at all.
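A small sketch of what that regularity buys you: in Clojure everything is an expression with the same `(operator args...)` shape, so any form composes anywhere a value is expected.

```clojure
;; 'if' is an expression, not a statement -- it returns a value.
(if (> 3 2) (+ 1 2) (* 3 4))        ; => 3

;; So it can sit anywhere a value can, e.g. inside a binding:
(let [x (if true 1 2)]
  (+ x 10))                          ; => 11
```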

Well, Clojure's slightly changed syntax shows that there is a need for more visual structure. Our visual systems do not work like formal language parsers--the simplest (or most "elegant") syntax is not necessarily the best. Also, the richness and complexity of natural languages indicate that (some) complexity is more natural to us.

Anyway, I tried many times to USE Lisp-like languages (Scheme, Clojure), but I ended up reverting to other languages. On the other hand, I now use Scala daily and have moved almost all of my code over.

Try Prolog :). It's like Lisp with syntax. (Seriously; however, as backtracking is built into the language, it can sometimes become annoying.) There's no really good Java implementation, but they all have Java bindings (the SWI Java interface is especially simple). As both Prolog and Java are garbage collected, there's much less boilerplate code than in a C interface.

One of the nice things about Clojure is Rich's willingness to dispense with old names that have gone stale with time. Clojure has no 'car' or 'cdr' functions. 'cons'(truct) lives on, as its mnemonic is useful.

I actually like car and cdr, because of the equal length of their names and the ease with which you can combine them in various forms to get specific pieces of a nested list, such as caddaddr. However, my approach to programming is that the name of a function is not as important as the language it finds itself in. You could write car and cdr for Basic and it still wouldn't be a good language to write real software in.

In Clojure you would likely use destructuring to dig pieces out of a sequence, avoiding explicit function calls altogether.
Agreed that languages need more than just good naming conventions. In Clojure's case, that 'more' is a careful set of abstractions, including some for modeling time. A good conceptual overview on time, state, and identity is online at http://www.infoq.com/presentations/Are-We-There-Yet-Rich-Hickey [infoq.com].
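A brief sketch of what that looks like (my example, not from the book):

```clojure
;; Destructuring pulls values out of a sequence without chains of
;; first/rest (or car/cdr) calls.
(let [[a b & more] [1 2 3 4 5]]
  [a b more])          ; => [1 2 (3 4 5)]

;; Nested shapes destructure the same way a caddaddr-style accessor
;; would walk them, but the shape is visible in the code itself.
(let [[_ [_ x]] [:ignore [:also-ignore 42]]]
  x)                   ; => 42
```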

What meaning does Contents of Address portion of Register and Contents of Decrement portion of Register have for anyone? Only 123 of the IBM 704s were built and sold, and they have long since disintegrated into the dust from which they were formed. The thing ran on vacuum tubes for goodness sakes! IBM sold its last one before Kennedy took office.

What they stand for doesn't really matter that much. You have to name the two elements of a pair something, and as long as the name is arbitrary it may as well be easily composed the way car and cdr are. But if you want to name them something else, that's fine, too.

I just don't think that a language can be judged based on the names it uses for these things. It's not like they're hard to remember if you actually use Lisp, and, in that regard, they are no different from a wide variety of other names in various languages.

Your rant is a bit misguided. Java is no worse at multithreading than anything else. The problem is with threading itself ("share everything"), versus multiprocessing ("share nothing").

Java is so heavily used on the server side that I'm pretty sure all new development could stop tomorrow, and there'd still be near-full employment for Java developers just doing code maintenance for the next 20 years.

The only possible way that you could seriously posit that Java is no worse at multithreading than anything else is if you've never actually used anything else in any kind of meaningful way. Java's intrinsic implementation of multithreading is one of hard mutual exclusion, which is awesome in a few ways, but really crappy if portability is your goal. Which it isn't, anymore, as I covered in my first post. The best thing that anyone can possibly say about Java's intrinsic multithread support is that it's ver

Java's strength in a multithreading context is due to the fact that multithreadedness was built into the language, and library authors and programmers have known to explicitly document what libraries are safe for multithreaded use and which are not.

Java the language provides just enough support (the Hoare Monitor and explicit memory model) to enable higher level thread-aware libraries, like the Java 5 concurrency classes.

Using those higher level classes can give you really good support, better than anything

To be honest, I have used lots of other languages to do multithreaded programming for many years--C++, C, and Python most recently. They all use critical sections etc. (i.e., mutual exclusion, as you mentioned) to hide stuff.

It's why I generally dislike threaded programming--the "share everything" model is fundamentally wrong, in my opinion, which I mentioned in my original post. It sounds like you agree with this position.

I realise that other languages, like Erlang, use variants of the Actor pattern to exchange messages rather than share state.

And I also realise that Erlang isn't using processes, but instead threads, so I guess it's an exception to my "all languages do threads badly" position. But it's not "typical" threaded programming.

It's not "threading" that you're opposed to. Your actual position (which is very sensible) is that it's a bad idea to mix concurrency and shared memory. Erlang actors don't share memory with each other (at least at the Erlang level), which is why you say threading is okay in that instance. You can write a program

Well, Clojure is a functional programming (FP) language that runs on top of the extremely pervasive Java Virtual Machine

I'll pass.

While FP permits some useful constructs (like Ruby's blocks), writing everything in a functional style is a pain in the ass. Combining FP with Java cranks it up from mildly interesting albeit somewhat annoying to full-blown annoying.

Well, had you read a bit more... or even read the summary... you might have noticed that you can interact between Java and Clojure. To the point that--as is pointed out in the book--you could isolate packages in Java that have few side effects (meaning they are good candidates for pure functional programming) and optimize them from a functional standpoint, which is hypothetically much, much easier for some tasks. You do not have to write a whole project in Clojure to take advantage of this. I hope this was clear in my review.

Posting as anonymous because it's a knee jerk reaction.

You can say that again...

I just can't stand the bloat associated with the JVM.

You might end up liking Clojure much more than you think. What if you didn't have any objects so that the garbage collector rarely (if ever) had to run? This is what Clojure could offer you with the platform pervasiveness and audience that comes with the bloated JVM.

My problem is the people who always strive for some divine "elegance." If someone (especially an academic) calls something "elegant," you can be 99% sure that it is a pain in the ass in practice.

- Elegant theorems are usually oversimplified--therefore unusable in practice (at least in the original form)
- Elegant proofs are too compressed--you are not able to learn from them, because the original ideas and failures of the prover got weeded out. They need a mental reconstruction.

It's not just Scala. It's the new mainstream. Java of all things is getting closures in v7 (see lambda-dev [java.net] mailing list for the ongoing discussion of the language changes).

On the .NET front, closures have been there for 5 years now (since v2.0) in C#, and for 2 years in VB (!), and they come complete with type inference--where in Ruby you'd write something like xs.find_all {|x| x > 0}, in C# these days it's xs.Where(x => x > 0)--spot 10 differences... And then, of course, LINQ--which is just a fancy

Scala is uber-cool. I use it daily now -- I moved from Java 2-3 months ago.

However:
- tool support is still in its infancy
- documentation sucks
- books and introduction materials concentrate on the shiny new things, not day-by-day use. This can confuse newcomers, as there is a lot of stuff to digest that you will rarely use (although you should learn it later).

As you read through the many comments that will be posted on this article, please keep the following advice in mind: People who spell it LISP are not qualified to judge modern (read: post-1984) Lisp-family languages, probably having 100% of their exposure come from a one-hour lecture and small homework assignment in a programming languages course taught by someone who thought that Common Lisp is a single-paradigm functional language.

"Programming Clojure" is a great book. Any book on a new programming language is going to evangelize the language--it's up to the reader to keep an open mind and decide for themselves whether it's a good fit for what they do. The book is starting to get a bit dated now, with Clojure up to release 1.2, but it is still worth getting in eBook form for a good understanding of the language fundamentals. I would also recommend "The Joy of Clojure," which is in early access from Manning. They take a different approach.

Both concepts are difficult for typical developers to get right. What the world needs is a revolutionary approach to these two topics such that average developers have the necessary tools to produce quality code for multi-core machines.

Someone needs to take design patterns to the concrete level and create design patterns that average developers can apply directly.

Some high level languages/ interpreters such as SNOBOL4, the Lisp programming language...

If one is really going to claim that Lisp is no more "self-modifying" than OO, I'd invite you to defend this on Wikipedia.

A good FP design dynamically creates and composes anonymous lambda functions on the fly, which I and professors at Berkeley would argue is the very essence of FP. Never have I seen this spelled out as a requirement for OO.

You seem to have this weird idea that lambdas == self-modifying code. It's just the opposite -- lambdas are the alternative to self-modifying code. Lambdas represent the ability for code to create other code, without having to modify any existing code. In fact

Permitting self-modifying code and requiring that are two different things entirely.

The possibility of self-modifying code is entailed in a language which allows code as data. That's not tantamount to self-modifying code either. Compilers take code as data and generate code as output but they aren't self-modifying.

Data formats are essentially special-purpose programming languages. That's clear to anyone who's ever looked at a PostScript file, but it's also true of things like word-processing formats.

"Creating and composing anonymous lambda functions on the fly" is usually referred to as closing over lexically scoped variables. It is certainly something that happens as a common idiom in functional languages, but it is *not* self-modifying code. It's more like creating an ad-hoc object that holds some data and has an overridden "operator()" (to use a C++ idiom). Yes, many Lisps can and do generate code at run-time. So do several object-oriented languages, like Smalltalk. There are many dialects of both.

If one is really going to claim that Lisp is no more "self-modifying" than OO, I'd invite you to defend this on Wikipedia.

GP is claiming that FP is no more self-modifying than OO. He is entirely correct - indeed, just looking at the list of languages, most FP ones are not self-modifying. Indeed, even large families of languages that are considered "stereotypically FP", such as ML, are not self-modifying.

There's a good reason for that, too. A large part of FP deals with advanced static typing systems, and it's damn tricky to combine that with dynamic code generation - how do you typecheck, then?

"Functional programming" and "self-modifying" code are closer to opposites than synonyms. Functional programs work with immutable data, eliminating the confusion that ensues when things can change behind your back. Clojure makes functional programming efficient (via persistent data structures) and accessible to mainstream developers (on the JVM). Also, Clojure connects FP to an understandable model for state and time (see http://www.infoq.com/presentations/Are-We-There-Yet-Rich-Hickey [infoq.com]).
People have been taking design patterns to a concrete level for years now; it's called copy/paste reuse. Lisp has had the design-patterns problem solved for decades, and Clojure is no exception.

However, the introduction of macros breaks that in some sense. You can create the macro code, pass it as an argument, get a new function in return and evaluate it. Still purely functional? Perhaps, but "looks like a duck, quacks like a duck". It's self-modifying code the way I see it.

Can you point out the "modifying" party in your sequence of actions? Combining code (and data in general) is not the same as modifying it, you know...

It's funny. I once turned in an assignment (in assembly language) that used a small bit of self-modifying code. I thought it was an elegant approach to the given problem.

Back in the days which most /. readers (including myself) haven't seen, self-modifying code wasn't only normal but effectively required. For example, some older Soviet mainframes had opcodes for array access where the index was always a hardcoded integer--so, to write a loop that iterates over an array, you had to patch up the instruction that accesses the array element on every iteration.

2: functional programming and self-modifying code have nothing to do with one another. Functional programming transforms a set of inputs into a set of outputs without reference to any external state. It is a purely mathematical expression. Functional programming languages can be used to write self-modifying code, but so can most languages.

1: if you understand what you are doing, asynchronous programming is easy. All you have to do is prevent screwing up the shared state between threads. Since functional languages have no state to share, you can avoid 99% of the pitfalls of dealing with threading.

I deal with threading every single day. Problems occur when the problem is intractable (and then nothing can help split it), the solution is poorly designed (too much shared state), the language is poorly documented/designed (ambiguous thread safety, inability to be thread-safe), or the developer just does not know what they are doing.

Functional programming is good at threading because it eliminates shared state completely. The question is no longer "can I split this, what do I have to rewrite from scratch, and will it be worth it?", but rather "will splitting this be worth the overhead of creating, context switching and synchronizing the threads?"
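In Clojure terms (a sketch under the assumption that each element does nontrivial independent work), that question can be as small as choosing between map and pmap:

```clojure
;; With pure functions and no shared state, parallelizing a map is a
;; one-character change. The only remaining question is whether the
;; per-element work outweighs thread-coordination overhead.
(defn expensive [n]
  (Thread/sleep 10)     ; stand-in for real work
  (* n n))

;; Sequential vs. parallel -- identical results, different wall time.
(time (doall (map  expensive (range 20))))
(time (doall (pmap expensive (range 20))))
(= (map expensive (range 20)) (pmap expensive (range 20)))  ; => true
```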

Abstracting multithreading to make the syntax easier for inexperienced developers is a mistake unless you can also fundamentally prevent most of the issues that make multithreading a pain in the ass. Adding real closures to more traditional languages like C++, Java, and C# would go a long way towards making multithreading easier to deal with, because the vast majority of problematic code in relation to multithreading will produce a compile error if used within a real closure.

2: functional programming and self modifying code have nothing to do with one another.

This is the equivalent of saying lambda functions have nothing to do with functional programming.

1: if you understand what you are doing, asynchronous programming is easy. All you have to do is prevent screwing up the shared state between threads. Since functional languages have no state to share, you can avoid 99% of the pitfalls of dealing with threading.

Therein lies the rub and I'm glad you put it out there.
Take any class of computer science students you wish and test them on asynchronous programming and synchronous programming. Guess which one will have the lower scores?
Not for all people, though, and presumably you are one of those for whom asynchronous programming comes naturally and easily. Therein lies the rub.
This concept also applies to functional programming vs. imperative and came up in a code review I had recently for some Perl code. The reviewer asked me why I used "for loops" as opposed to "map". I never use map.
I said because it has been proven time and time again that people do not understand "map" as well as they do "for loops", especially the side effects in Perl (not truly functional). There is no performance difference either way, but there is a human difference. I argue my code is more maintainable at the expense of a few lines of ASCII text. You have no idea whether the person following you finds what you find understandable equally understandable, and in the case of "map", using a for loop is trivial; the cost to me is nothing and the opportunity for maintainability greater.
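The thread's example was Perl, but the same trade-off can be sketched in Clojure: map states the transformation, while an explicit loop spells out the mechanics a maintainer may find more familiar.

```clojure
;; The functional version: what, not how.
(map #(* 2 %) [1 2 3])              ; => (2 4 6)

;; The explicit-loop version of the same computation.
(loop [xs [1 2 3] acc []]
  (if (empty? xs)
    acc
    (recur (rest xs) (conj acc (* 2 (first xs))))))  ; => [2 4 6]
```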

> Not for all people though, and presumably you are one of those for which asynchronous programming comes natural and easy too. Therein lies the rub.

It's actually easy to write multithreaded code that works "most of the time" (compared with its single-threaded counterpart). What's difficult is discovering and avoiding the potential race conditions; most people just assume it's fine if their 30 functional unit tests pass cleanly.
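A sketch of the kind of code that "works most of the time" (an illustrative example with invented names, not from the book): the class below increments a shared counter from two threads without synchronization, so a quick test run will often produce the expected total even though updates can be lost under contention:

```java
public class RacyCounter {
    static int count = 0; // shared mutable state, no synchronization

    // 'count++' compiles to read-increment-write, which is not atomic;
    // two threads can interleave and lose updates. Small runs -- like a
    // quick unit test -- will frequently still hit the expected total
    // by luck, which is exactly why the bug survives testing.
    public static int racyTotal(int perThread) throws InterruptedException {
        count = 0;
        Runnable work = () -> {
            for (int i = 0; i < perThread; i++) {
                count++;
            }
        };
        Thread a = new Thread(work);
        Thread b = new Thread(work);
        a.start();
        b.start();
        a.join();
        b.join();
        return count; // anywhere from a handful up to 2 * perThread
    }
}
```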

This is the equivalent of saying lambda functions have nothing to do with functional programming.

No it's not. The whole point of functional programming is to maintain referential transparency. It is literally impossible to modify a named value, including functions.

Self-modifying code is what you get when you make your imperative, referentially opaque code modify itself. You CANNOT do that in a pure language. Although purity isn't the gold standard in functional programming, it is a goal functional programmers strive for.
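To make referential transparency concrete, a small Java sketch (illustrative names): the pure function can be replaced by its result anywhere in a program without changing behavior, while the impure one depends on hidden mutable state and cannot:

```java
public class Purity {
    static int counter = 0;

    // Referentially opaque: the result depends on hidden mutable state,
    // so two identical calls return different values.
    public static int impureNext() {
        return ++counter;
    }

    // Referentially transparent: the output depends only on the argument,
    // so any occurrence of square(3) can be replaced by 9.
    public static int square(int x) {
        return x * x;
    }
}
```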

Asynchronous programming sucks big time (at least for programs that are beyond trivial). There is a reason for the idea of "blocking calls": it makes code much more readable, even though the asynchronous model leads to faster code.

I earn part of my money doing Discrete-Event Simulations, which are by their nature much like Actors - everything is done through message passing, and asynchronously (i.e., no blocking calls).

It always makes me twitch when people come and try to sell abstractions like STM or Actors.

If metaprogramming is self-modifying code, then every compiled program is self-modifying: there is no fundamental difference between translating a program to assembly and translating it to the same language in which it was written; if macros were inherently self-modifying, MACROEXPAND would be useless. If closures were self-modifying, then every C callback system that incorporated an opaque user-data parameter would also consist of self-modifying code.

I am concerned about using this for large software engineering problems, with data and methods scattered loosely about the code, because Clojure does not enforce encapsulation. That brings up the old joke about "Chinese programming in LISP": an hour later you will have forgotten what you coded. More recent versions of LISP, such as Scheme, implement OOP more soundly than the version in Clojure.

Some of the nice things in Clojure, like the rich set notation, are not LISP-specific. There is talk of

I am not sure you know what you are concerned about, as you use the word 'abstraction' in the title of your post, the word 'encapsulation' in the first sentence, and then conclude with 'OOP'. So I will take them one at a time. (1) Clojure is built around abstractions; there is a rich set of carefully designed interfaces. (2) Clojure's approach to data is certainly not what an OO person would expect. In Clojure, *data* encapsulation is considered folly. However, this must be taken in the context of several

Clojure's approach to data is certainly not what an OO person would expect. In Clojure, *data* encapsulation is considered folly.

Wait, what? How can data encapsulation be "considered folly" in a language quite deliberately called "Clojure", when closures are perhaps the most succinct way to tightly encapsulate data?
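For example, in Java a closure can encapsulate state exactly as this comment suggests (an illustrative sketch; AtomicInteger is used only because Java lambdas cannot reassign captured locals):

```java
import java.util.concurrent.atomic.AtomicInteger;
import java.util.function.IntSupplier;

public class ClosureEncapsulation {
    // The AtomicInteger is reachable only through the returned closure:
    // callers can advance the counter via the closure but have no way to
    // read or modify the underlying state directly.
    public static IntSupplier makeCounter() {
        AtomicInteger state = new AtomicInteger();
        return state::incrementAndGet;
    }
}
```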

A good question. In Clojure, the idiomatic way to get the good parts of OO is to use defrecord. Records created this way do not encapsulate data. The data is immutable, and is directly accessible via public fields or map keyword lookup.
Some other interesting differences from most OO: You *must* program to protocols (interfaces)--records cannot implement methods that are not part of a protocol. And implementation inheritance is forbidden.
Defrecord supports a very iterative approach to development. You c
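Java's records (Java 16+) make a rough analogy for this style: immutable data with public accessors, with behavior attached via an interface rather than hidden state. The names below are invented for illustration:

```java
public class RecordDemo {
    // Protocol-like interface: behavior is declared separately from data,
    // loosely mirroring programming to a Clojure protocol.
    interface Named {
        String displayName();
    }

    // Immutable data with auto-generated public accessors -- the fields
    // are not encapsulated in the OO sense, much like a defrecord.
    record Person(String first, String last) implements Named {
        public String displayName() {
            return first + " " + last;
        }
    }
}
```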

In my experience with functional languages, the trouble isn't remembering what you coded, but where you put it. Reading functional programs is really easy. You barely even have to understand the operators, as long as you understand that a (mathematical) function is a certain type of relation (a many-to-one relation). Equivalently, a function is a set of ordered pairs with the property that if (x, y) and (x', y') are in the set, and x = x', then y = y'. Equivalently, a (computable) function

My biggest problem with the book was that it doesn't explain what exactly the problem is that Clojure solves (or attempts to solve).

Why shouldn't I stick with one of the already existing functional languages for functional programming? Why shouldn't I stick with Java (or any of the other JVM languages) if I need access to the reams of Java classes that already exist? Why not use Erlang if you're thread-paranoid? And so on. Nothing wrong with a shiny new language, but please explain why it is better than the alternatives.

Concepts
A number of concepts and paradigms are specific to functional programming, and generally foreign to imperative programming (including object oriented programming). However, programming languages are often hybrids of several programming paradigms so programmers using "mostly imperative" languages may have utilized some of these concepts.[24]
Comparison of functional and imperative programming.
Functional programming is very different from imperative programming.

Well the code's there, it's not just going to disappear if Oracle abandons it. And it's free. So if it meets people's needs then there's no reason they should refuse to use it just because it's no longer supported.

Sure, no one knows what Oracle is going to do here except Larry Ellison, but that isn't stopping Java development.

Android?

IBM?

There's already Apache Harmony, and IBM has their own JDK. If the Sun/Oracle JVM goes away, it's not the end of the world. As a matter of fact, that would probably be a *good* thing. Java has languished behind C# and .NET in terms of language features (you can argue all you want about whether they're useful or not), and JDK7 is still a pipe dream.

Android does not use any variety of Sun JVM. It uses Dalvik, which runs its own bytecode.

But nobody sane is going to build extensions to Java on their own. Without Sun, there's nobody in charge of developing the Java language. It's going to be a long time before it progresses, and the far more likely result is that it will shrivel and die. Good riddance. Dalvik can be adapted to run other languages, and everyone will move on.

Oh yes, the JVM is dying, and that's why the JVM for Java 7 will improve HotSpot and eventually be merged with JRockit - so that you will be able to monitor the last breath of the dying platform with one of the best VM monitoring systems in the world, complemented with a superior GC. All that only so it can feel well while dying. Sure.

Groovy, Scala, Clojure (which I'd never heard of), Jython, JRuby, BeanShell, etc. all say bollocks to that. Java 7 also delivers dynamic invocation, which is likely to mean even more new languages will appear that utilise the JVM.

If you want radical improvements, go mess with Groovy. Java is a mature language with a mature feature set and doesn't change rapidly for pretty obvious reasons. The invokedynamic thing is anticipated by dynamic languages, but it doesn't stop you from using them right now if you want.

Everything's on its way out. As a society we go through programming languages faster than socks. If you only target "in" languages, you will be spending more time learning than doing. Most languages are "good enough" to get the job done. Stop chasing shiny new features that may improve productivity by 2%, if you are lucky, and when you master it, you'll find it's time to retire anyhow.