"Relatively easy to understand even if you don't know the language" <- We have empirically verified this by showing code samples of quicksort side-by-side in Java, Scala, Clojure, and Eta to ~100 programmers who had no experience with functional programming at an exhibition. Eta won, followed closely by Scala (people just love their curly braces!). One person liked Clojure because of "it looked like English" and no one liked Java. For this reason, we have posted it on the landing page. We have made no claims that it's the most performant nor that it's the fully correct quicksort (accounting for uniques) - the whole point was to highlight the expressiveness.

If I understand you correctly, you basically verified this for Haskell (which you call Eta), since there are no visible differences with Haskell yet.

Why make another language anyway, if your experiment shows that Haskell is readable? You're saying that the problems with Haskell adoption are a lack of tooling and documentation, which can surely be resolved without designing another language. In fact, I don't think bad language design has ever hindered the adoption of any language (examples: COBOL, APL, MUMPS, BASIC, PHP, JavaScript).

I wish you wouldn't break compatibility with Haskell, ever. I think it would make both languages stronger (if they were just one), since everybody could just reuse code. I don't see a reason for having another language just to fix minor syntactic problems.

Edit: I think what I am not clear about is where exactly you intend to break compatibility with Haskell and how you think that will help adoption of your language.

Check out the FAQ: http://eta-lang.org/docs/html/faq.html. For the record, I love Haskell as a language and even after solving the tooling and documentation problems, it doesn't solve the integration problem. The JVM is widely used and being on top of it makes it easy to integrate into existing systems. You can certainly reuse code between Eta and Haskell given the number of extensions that are in common.

Well, I didn't find the answer to my question in the FAQ. I see where you want to differ from GHC, but I don't see where you want to differ from the Haskell standard.

It seems to me that Eta is basically a Haskell 2010-compliant compiler/runtime (correct me if I am wrong) for the JVM.

Which actually is great and useful and I really like that about the project. I just think the project shouldn't be called "another language", but rather "Haskell 2010 compiler with some GHC extensions for the JVM". I actually don't even care if you avoid Haskell in the name, as long as you keep the compatibility.

How Eta is going to differ from the standard is not clear to me either. It depends on a lot of factors, such as how GHC decides to evolve and how that affects the usability of the language. The whole point of the name change was to have the choice of not accepting new features if we felt they weren't ready yet for wide consumption. GHC is a playground for PL research, and SPJ has made that clear on many occasions. It's great for research, but scary for industry adopters. We are focused on industry more than research.

When we say "Haskell" it's difficult to separate it from "GHC's Haskell" which is a much more confusing beast, full of half-discarded research projects, dependency on offshoot libraries, and a ton of tribal culture.

> full of half-discarded research projects, dependency on offshoot libraries, and a ton of tribal culture.

That's a bit harsh. GHC is a pretty old beast and its source isn't as nice as I wish it was (if only imports were qualified and every GHC source file didn't start off with a huge list of imports), but it is hardly as bad as you make it sound. It is pretty darn well maintained. Heck, it is even one of the examples in "Architecture of Open Source Applications" [0].

> That's a bit harsh. GHC is a pretty old beast and its source isn't as nice as I wish it was (if only imports were qualified and every GHC source file didn't start off with a huge list of imports), but it is hardly as bad as you make it sound.

I don't know the degree to which you want to make it sound bad. I think it's just the reality of GHC Haskell. There are many language extensions, and while in any given year a given set of them is considered normal, that set changes as time goes on. But in many cases libraries never switch over to the new extensions, so over time we end up supporting a rather large superset of Haskell in GHC.

So while yeah, the GHC codebase itself is fine, the culture and the community providing the library ecosystem has made things challenging.

What's more, some of the core concepts haskellers lean on are just... I dunno. Underbaked? I've used a lot of lens libraries and conduit libraries now and they always feel a bit undercooked.

So it looks like you want to remain compatible with the Haskell 2010 standard, with some conservative extension choices. Unless a new Haskell standard appears that you choose to disagree with, why not just accept that what you're doing is a compiler for standard Haskell? I don't see why compliance with standard Haskell should be scary for industry adopters; on the contrary.

1. So it's a language optimized for the kind of programmers who go to conferences, nothing wrong with that.

2. This goes far above and beyond what most other language designers do, which seems to be "write the kind of language I want to use." Nothing wrong with that, either, but no sense in criticizing the language designers doing more research than most for not doing enough research.

I agree that it's not a good implementation; as I said, it's for illustrative purposes. You also never implement the "Hello World!" program in real life, yet every programming language intro starts with it. Relative to that, a sorting algorithm is a lot more instructive. I think if I changed the name of 'quicksort' to 'sortUniqueList' that would clear things up.
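For context, a sketch of the kind of sample under discussion, using the proposed name (this is the classic list-comprehension illustration, not necessarily the exact code on the landing page):

```haskell
-- The strict comparisons (< and >) silently drop duplicates, which is
-- why 'sortUniqueList' is a more honest name than 'quicksort'.
sortUniqueList :: Ord a => [a] -> [a]
sortUniqueList []     = []
sortUniqueList (p:xs) =
  sortUniqueList [x | x <- xs, x < p]       -- everything below the pivot
    ++ [p]                                  -- the pivot itself, once
    ++ sortUniqueList [x | x <- xs, x > p]  -- everything above the pivot
```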

I don't have one, and I confess that worries me. In trying to teach my children, I have actually grown away from "this is what the code looks like" toward examples. And I commend the site for doing that. 2048 seems like an odd example, but conciseness is the goal, I'm guessing. Still a fun one. (Snake would be more fun for kids, I'm guessing. But that's just a guess.)

2048 was just convenient. It just so happens that there was an existing Haskell project that used the native GLTK library; I just swapped that out for JavaFX and it worked. If I were to write a book on Eta, I would certainly use a simpler game! I personally learned to code as a child by playing with examples and developing a game, so I'd bet your kids are going to grasp programming fast. The human brain learns naturally from examples as opposed to theory.

I'm sorry, but this isn't at all easy to understand if you don't know the language. For example, what is $ in this context? How does <- work? How does the where clause work, precisely, and why is it required before the assignments? Is the colon required to separate arguments?

I'm not expecting you to give me a tutorial here in the comments, and I'm 100% sure I could read a tutorial and understand it in a few minutes, but it doesn't seem particularly obvious if you don't know Haskell.
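Not a tutorial, but a hedged sketch (with made-up names) of the syntax pieces asked about:

```haskell
import Data.Char (toUpper)

greet :: String -> String
greet name = "hello, " ++ name

-- The colon is the list constructor, not an argument separator:
-- the pattern (x:_) splits a list into its head and tail.
headOr :: a -> [a] -> a
headOr d []    = d
headOr _ (x:_) = x

demo :: IO ()
demo = do
  msg <- pure (greet "world")           -- '<-' binds the result of a monadic action
  putStrLn $ map toUpper (exclaim msg)  -- '$' is low-precedence application:
                                        -- f $ x is f x, saving parentheses
  where
    exclaim s = s ++ "!"                -- 'where' scopes helpers to this definition
```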

For people new to the party, this looks great. For those who had been following the GHCVM project, it's a horrible disappointment.

Basically, this project used to be a fork of the Haskell compiler to support the JVM. Now it's a fork of the Haskell compiler that may or may not be compatible for your use case depending on their opinion of GHC's features.

> For people new to the party, this looks great. For those who had been following the GHCVM project, it's a horrible disappointment.

maybe it's for the best? a ghc-is-the-spec approach to developing a haskell implementation seems... well, perhaps convenient, but less hygienic for the overall language. (that said i don't actually know what eta's philosophy here is.)

I don't think it is necessarily a bad thing if they remain Haskell 2010 compliant and use pragmas for their own extensions. I certainly would like to see features like extensible records and variants, and this is unlikely to happen anytime soon in GHC, as the focus seems to be on dependent types.

This is exactly the direction we're going. I'm a big fan of extensible records/variants. We'll be supporting the GHC 8 extensions that are orthogonal to the type system refactoring (Strict, StrictData, etc.), which is pretty much everything other than TypeInType.
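For readers unfamiliar with how that opt-in works: GHC-family compilers enable extensions per file with a LANGUAGE pragma, so adoption can be as conservative as you like. A minimal sketch with a hypothetical type:

```haskell
{-# LANGUAGE StrictData #-}

-- Under StrictData, every field below is strict: building a Config
-- evaluates the field values instead of storing thunks, with no
-- other changes needed anywhere in the program.
data Config = Config
  { host :: String
  , port :: Int
  } deriving (Show, Eq)

defaultConfig :: Config
defaultConfig = Config "localhost" 8080
```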

I'd say the focus (as always) is on making GHC a better compiler. Join points, levity polymorphism and the new typeable improvements are all in that category. And there's an amazing amount of stuff under the hood in 8.2. Dependent types? 8.6, 8.4 if they're lucky.

i feel like each "new" general-purpose language project is just nipping at the boot-heels of the much bigger problem of constructing large-scale systems that can be fully understood by a few (2-3) people.

VPRI's research has shown that one important method for constructing large-scale systems that can be understood by small teams is to have a pipeline of problem-specific languages that express major portions of the system. they were able to reduce LOC for a typical OS with networking and graphics by 3 or 4 orders of magnitude.

general-purpose languages can't compete with DSLs in terms of expression, and yet we keep inventing them. i think our lack of imagination is starting to show. compilation and language design will need to become much more common-place if we expect to continue scaling up.

a tower of babel in computing is healthy, no matter how much employers want us to be easily-replaceable cogs in an IT machine.

(1) This language has been invented long ago; its name is Haskell. Eta is a JVM backend / target for it.

(2) A good general-purpose language is good at creating pseudo-DSLs, also known as abstractions or APIs. This can include customized syntax, but usually it's not a brilliant idea. DSLs using the common syntax are quite prevalent in Lisp / Scheme / Clojure, or in Ruby. Haskell is reasonably good at creating DSLs, much better than e.g. Java.

I think it might interest you that Haskell-like languages are among the best for embedded DSLs, after Lisp-like languages. If I recall correctly, Standard Chartered uses an EDSL written in Haskell that's maintained by ~10 Haskell programmers, while the remaining ~90 people program inside this EDSL to specify new rules. Similarly for Facebook and their spam filter rules.
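To make the EDSL idea concrete, here is a toy sketch (hypothetical names, not any firm's actual system): rules are plain values of the host language, so they inherit its type checking and abstraction for free.

```haskell
import Data.List (isInfixOf)

-- The DSL's vocabulary: a rule is an ordinary data value.
data Rule
  = Contains String   -- the message mentions this word
  | LongerThan Int    -- the message exceeds this length
  | Both Rule Rule    -- both sub-rules must hold

-- One small interpreter gives the vocabulary its meaning.
matches :: Rule -> String -> Bool
matches (Contains w)   s = w `isInfixOf` s
matches (LongerThan n) s = length s > n
matches (Both a b)     s = matches a s && matches b s

-- A rule author combines the vocabulary without touching the interpreter.
spamRule :: Rule
spamRule = Both (Contains "free") (LongerThan 20)
```

Adding a new kind of rule means adding one constructor and one interpreter case; the rule authors never touch the interpreter.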

The hard part is finding the right abstractions on which to build your large-scale system. When you have those abstractions, you can build a DSL to help you implement your system. Or you can build an API for an existing language. They are two sides of the same coin and I can see arguments for / against either approach.

I think where your criticism is really valid is that they are building tools that they think will help them address the real problem. But it's not clear that they have a real problem yet and that they fully understand it. If they do and Eta is their "DSL" to solve it, then that's great.

I have way too often fallen into the trap of finding (or dreaming about) the right language / IDE / etc. to solve my problem instead of actually working on the problem and gaining a better understanding of it. It's so easy to spend all your time on technical details because it's fun when you really should be worrying about the stuff that pays the bills.

I agree with this personally, but if you read through VPRI's publications you will find that the vast majority of their DSLs seemed to converge to "functional with some special features (which you could implement in Haskell using monads)". This is especially interesting, since in early publications they started with a number of truly different designs.

every framework we use -- library, language, runtime, operating system, hardware -- makes decisions that constrain what we can express.

languages like haskell, lisp, or ruby may make it easy to create new control structures that blend well with the existing syntax, but new control structures don't help if it's still annoying to write, say, very long lines of free-form text -- a problem that xml handles more gracefully than lisp. syntax makes a difference.

i think it's unlikely we'll win a several-orders-of-magnitude decrease in complexity by confining ourselves to the syntax decisions of one language.

One language to rule them all. Personally, I am rooting for both to become a success, though Eta may have the easier path given that it can piggyback on the JVM, while Scala Native will have to, for example, come up with a plausible GC solution (matching the JVM's world-class GC is a tall task, to say the least) along with porting myriad Java/Scala libs to native.

Why do we need this? We've already had Frege (https://github.com/Frege/frege) filling this role for roughly three years now. It seems to me the DRY principle applies not just to personal projects but to public works as well.

I'd hope Eta "just" focuses on compiling GHC's core (not Core) intermediate representation, STG (or C--), into Java bytecode: then there's no need to forever catch up re-implementing the ever-evolving language extensions, plus you get all the desugaring, Core-to-Core optimization/simplification round trips, custom rewrite rules, type checking/verification, etc. from GHC for free. In fact, there's no need for any great checks and verifications, "just" a (probably highly intricate, in the end) series of code-format transformations... in theory ;) GHC's various intermediate representations are pretty brilliant, and I think they should be leveraged as much as possible when it comes to new compilation targets and transpilation. Fun fact: it seems that as of 2002 there was, for a while, an STG-to-JVM backend.

Now there are issues with this approach too! You'd end up with a massively convoluted set of mappings from Haskell's factory defaults to the JVM's. We're talking the equivalents of the `Prelude` module, the `base` package, the RTS (garbage collector and much more... ouch!), and for each individual case, deciding where to draw the line between the semantics of Java's built-ins and GHC's would be a massive challenge.

What about GHC's existing LLVM backend? Does LLVM not have its own Java bytecode backend(s)? (OTOH, additional major dependencies always kind of bite...)

Eta in fact only deals with the STG-to-Java-bytecode transformation. I agree that the intermediate transformations are brilliant, so I am very careful about playing with the frontend bits of the GHC codebase. Eta currently implements almost all of GHC's primitive operations faithfully, and I have taken extreme care to preserve semantics. In some obscure cases, though, I just gave up, since there is no platform-independent way of implementing certain things (like vectorised instructions).

GHC 8 will take some time. The current position is to prioritize the new extensions that start creeping into Hackage libraries. There were significant changes to the codebase in 8, and I want to wait until it's stabilized. Eventually, yes.

No, you're right. I just don't want to swap out the frontends right now (again, stability). I'll definitely be cherry picking bug fixes and non-pervasive changes once I get a solid test/benchmark suite setup.

Oh, while the landing page doesn't mention Haskell, the documentation does (it says they don't use the name Haskell in an attempt to avoid the "26 years of psychology and perception that was built around the language").

That seems like wishful thinking. Anyone who knows anything about Haskell recognizes it on-sight. Anyone who doesn't will probably find it fairly radically different from what they're used to in a programming language, and not realize that it comes from a deep tradition with lots of community support.

I can sympathize with the reasoning behind the rebranding. "A new language on the JVM (which happens to be Haskell)" has a slightly different spin than "Haskell on the JVM". The latter sounds like it is targeted at Haskellers looking for new runtime choices, whereas the former is quite suggestive of moving Haskell a considerable step closer to the comfort zone of a certain group of non-Haskellers.

Unfortunately, the "(that happens to be Haskell)" part is completely left out on the entry page. This gives a terrible first impression, because it seems as if someone were trying to bootleg those 26 years that went into Haskell and sell them off as their own achievement. The documentation makes a reasonable effort to set this straight, but the aftertaste from the first impression remains.

Performance of a tiny code sample is not why people select one language over another. If that were the case, no one would be using anything besides assembly or C. Even in other languages, you'll use an abstraction-heavy implementation for the benefit of long-term maintenance and code evolution, and then performance-optimise the bits that pack a punch. Performance optimisation in any language looks ugly, be it C or Haskell.

If I were working on this kind of project, I'd just take code-that-compiles-in-GHC (i.e. thousands of samples; one such dump was just put up today for Haskell: https://github.com/metaleap/rosetta-haskell-dump) and automatically verify that the outputs-of-my-outputs match the GHC-compiled ones... regardless of the actual real-world merit of said code samples themselves :)

You can use JVM debugging tools and with a guide on how to interpret the results for Eta (or a tool that does the mapping to Eta source code for you), debugging can be pretty awesome. The implementation was specifically designed to re-use the JVM Thread stack as much as possible so that you can get stack traces and figure out the source of the error. You'll get stack traces similar to other JVM languages like Scala and Clojure. See http://eta-lang.org/docs/html/eta-user-guide.html#debugging-... for an example. You can use tools like slf4j-ext as java agents and get nice traces - it's how I debug the Eta runtime.

The benchmarks I've run are extremely promising, and Eta is sometimes competitive with GHC after warmup. GHC's aggressive optimisations with -O2 plus the Oracle JVM's JIT seem to be a very powerful combination. I have avoided publishing the results because I still need to spend more time looking at the numbers and understanding why it was so fast in certain cases (checking for any mistakes I've made). I'd rather take the time to report proper benchmarks than hype unverified results.

Hmmm... this could get me back into Haskell (cough, I mean Eta). I always hated the Haskell environment. A pleasant-to-use port to the JVM could inject some life back into that interest. There was Frege, but it just seemed fringe at the time... we'll see if this has any steam behind it.

If they had any marketing sense, they would start by telling me why it's better than Clojure. I'm sorry to offend the Scala guys, but if you want to be the best JVM language, you need to improve on Clojure.

I hope what you're really saying is that they need to do a better job of marketing to folks not already familiar with Haskell or similar ML-derived functional programming languages.

And that's probably true.

As for specifically competing with Clojure... frankly, who cares? The two are so different as to render such a comparison fairly useless: Lisp-y s-expressions versus ML-style syntax. Strict Hindley-Milner-style type system vs Lisp-style dynamic typing. Lazy evaluation versus strict. Pure versus impure. The list goes on and on.

Clojure is an interesting language, no doubt. But your response would seem to imply it's the end-all and be-all of programming languages, and given how subjective such an assessment would be, it strikes me as a fairly vacuous claim...

> Clojure is an interesting language, no doubt. But your response would seem to imply it's the end-all and be-all of programming languages, and given how subjective such an assessment would be, it strikes me as a fairly vacuous claim...

As crazy as it sounds, I could imagine myself making such a claim, but only because of Clojure's position as a dialect of lisp.

The reason I think I could make a claim is because of lisp's dedication to simplicity -- technically most general-purpose languages are equivalent, but lisps stand apart for me because they generally focus on giving the programmer the tools to build what they want and nothing else. As a result of this, lisps are generally (if not always): multi-paradigm, DSL-friendly, etc.

This is getting long, but the basic point is: I could see lisp being an end-all be-all of programming languages, because it gives the programmer the right tools, and I can almost always more easily envision going from lisp to some other language (e.g. haskell) than going from some other language (e.g. haskell) to lisp. You could of course say similar things about assembly or other languages that give the programmer even less, but lisp strikes what I think is the best balance I've seen: just enough power without requiring too much in return from the user, speaking specifically about language ergonomics (leaving out implementation details like garbage collection, etc).

Eh, I've seen that claim innumerable times when reading about Lisps. "You can build any language you want on Lisp!"

Yeah, no one does that.

There's a reason other languages exist and Lisp hasn't simply supplanted them. If I want a HM-style strictly typed, lazily evaluated programming language, I'm not going to build it myself on top of lisp. I'm going to find a language that suits my preferences and simply use it.

Besides, you could make the same claim about any programming language. "Hey, C is the ultimate language because I could use it to build a compiler for the language I actually want!" Correct in theory. Meaningless in practice.

As for being multi-paradigm, balance between functionality and simplicity, etc, those are all personal preference. A Haskeller could equally say they like an opinionated language that gives them a rich set of tools to build correct programs more easily. These are factless value-judgements. Which is why HN sees new programming language announcements every other week...

I've also seen and thought of that retort lots of times which is why I made sure to qualify my statements.

What you're saying is true, but there's not that much you can say about languages that ISN'T personal preference, in the end. What I'd really like to hear is from someone that's enjoyed lisps, AND other languages, and feels that another language would be the one on which to make this claim (the claim that a language was the "end-all-be-all" of programming languages).

You mentioned not building it yourself on top of lisp, but that wasn't my point. My point is that if I DID have to build it myself, I would choose lisp as the language to amend; not that it makes sense for everything to be built on lisp. That's what makes me think of it as a possible end-all-be-all language for me, and what makes me think I could make that claim about it. I can't think of another language that is as expressive and flexible, yet as simple, as lisp.

My point was that, knowing and liking languages other than lisp, Lisp is the only language that I could consider making such a claim about. A haskeller COULD easily say they like an opinionated language, and that would totally be their choice, and they'd be right, for them. I simply offered my opinion why I could imagine myself making that claim.

> What you're saying is true, but there's not that much you can say about languages that ISN'T personal preference, in the end.

I agree, which is why I said your original comment (that these guys need to prove their language is somehow better than Clojure, because Clojure is, in your opinion, the "best" language on the JVM) was a bit vacuous. ;)

Incidentally, I will say I'm enormously pleased that Clojure has seen some non-trivial success, and I'm happy you've found a language that you seem to enjoy so much. Lisps have a lot to offer the world, and it's nice to see a mainstream, Lisp-derived language running on a modern platform like the JVM!

I happen to feel the same way about projects like this one that are bringing ML-derived languages to the JVM (incidentally, I also happen to be an F# fan on the CLR for the same reason).

And the fact that you could happily use both for different parts of a problem domain in the same project makes me happier still!

You just don't know about it, because these languages typically look like Lisp on a superficial level. Those who don't know the first thing about Lisp can't see what has been done; they see only symbols or parentheses. Nothing visually says "I am lazily evaluated and typed" or whatever.

Many Lispers don't want to build the language they want, simply because that language is Lisp.

We are talking about the JVM. And as such, the existing competitors are Java, Scala, and Clojure. Scala had promise, but many significant users discovered warts on large projects. And on top of that, Lisp is not only sexy (rationally or irrationally), but Clojure has a lot of intellectual momentum. Further, we have Clojurescript for nodejs. It is the one to beat.

So if someone is going to sell me on a better JVM language, they need to at least give me a pro/con list vs Clojure. While HN might disagree, the market (people who work in companies that live for revenue, not VC) will agree in my interest for that comparison.

I looked at Clojure, but the lack of static types, and thus losing the benefit of safe refactoring, always put me off. Out of curiosity, how do you safely refactor Clojure code? The safe refactoring offered by type-safe languages is such a major aid to redesigning code that it has become difficult for me to work in dynamic languages on any large code base.

I've been a full-time Clojure dev for about 4-5 years now, and the number of times you have to refactor (in the traditional sense) Clojure code is pretty rare. The reason is that Clojure leverages its immutable data structures, namely hash maps.

So let's say I have a function that needs :name and :addr and adds a new field called :name+addr. In most (all?) statically typed languages I would have to say that this function takes a "name and addr" and returns a "name, addr, and name+addr". So I have a type conversion, right? If the time comes that I need to modify these types, I have to do some sort of refactor.

In Clojure we'd just take a hash map and add a new k/v pair to the hash map we get. Any hash map will work, as long as it has the proper entries, and we'll just add a new entry to that. So if the time comes that I want to call this function with a "company" instead of a "person", it just works, as long as I have the proper keys. And to help check this sort of stuff we do have spec (https://clojure.org/about/spec).

TL;DR: my belief, having programmed in both static and dynamic languages, is that deep refactoring is primarily driven by the inflexibility of static types.
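For contrast, the map-based style translates almost directly into Haskell via Data.Map, which also shows where the trade-off sits: once you give up a fixed record type, the key checks move to runtime in the typed language too. A hedged sketch with hypothetical keys:

```haskell
import qualified Data.Map as Map

-- An "entity" is an open string-keyed map, much like a Clojure hash map.
type Entity = Map.Map String String

-- Analogue of the function described above: any map works as long as
-- the keys are present, but missing keys surface at runtime rather
-- than at compile time.
addNameAddr :: Entity -> Entity
addNameAddr m =
  case (Map.lookup "name" m, Map.lookup "addr" m) of
    (Just n, Just a) -> Map.insert "name+addr" (n ++ ", " ++ a) m
    _                -> m
```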

Clojure and Haskell are two almost completely different beasts. Thanks to clojure's simplicity (as a dialect of lisp), it could BECOME haskell if you wrote the DSLs for it, but the reverse is much harder, I would think.

Eta is not looking to convince clojurists to use haskell on the jvm instead of clojure, it's (most likely, I don't maintain the project) to enable those who want to use haskell but want to also use the JVM to do so.

Regardless of whether Haskell is a better choice for your project or clojure is, the eta team (or any other team for that matter) is under no obligation to find that out for you, or even to make your search any easier. That's your job -- if clojure still fits your needs and doesn't have any glaring issues, absolutely use it.

Yes. To clarify, I'm saying that it would be easier (for me at least) to write with haskell's semantics/paradigm in lisp than it would be for me to write with lisp's semantics/paradigm in haskell.

I think the statement is almost axiomatic. Lisp is multi-paradigm, by way of simplicity/choices not being made for you. Haskell is decidedly functional (which is something I love about it). Making a more specific thing into a less specific thing (while possible in this case), seems like a more difficult task than molding the clay that is LISP to look like haskell.

Lisp is sexy, but parentheses are a really hard sell for the wider crowd. It sounds silly, but that's the reality. If they had forced people to learn and use Lisp instead of C, the world might be a very different place. I personally still love them now when I look back at Clojure code.

Clojurescript is just wonderful. I'm not even sure a Haskell-like language that compiles to JavaScript (Elm, PureScript, GHCJS) with really good tooling can beat it, though I haven't tried. Would be an interesting experiment to do a comparison by writing the same app in all the languages.

Thanks for bringing this up! I will definitely add a section explaining the benefits for Scala and Clojure devs. I shipped a non-trivial Clojure web service to production that was handling 1M requests a day. That experience is what finally made me understand that the JVM is a wonderful platform, and what prompted me to work on Eta, given that Frege was insufficient for what I wanted (it doesn't support a lot of useful GHC extensions). Clojure was wonderful in that I could be as concise as I wanted via EDSLs, and the immutability by default plus the concurrency primitives were a joy. But the lack of a type system (please don't mention core.typed) and hitting errors at runtime that a compiler would have caught were a real pain. I'm sure you can ameliorate that with libraries like prismatic/schema (which I did) and strong test discipline, but it's great when the compiler just takes care of all of that for you. Once you really "get" Haskell-like languages, you miss the benefits no matter what language you try. As for Scala, I haven't built anything significant with it, and I wouldn't want to after some of the examples I've seen. It's too verbose for a typed functional language.

I find that you still need the same tests using a statically typed language that you would need using Clojure. The reason being that the type system does little to ensure semantic correctness.

For example, consider a sort function. The types can tell me that I passed in a collection of a particular type and got a collection of the same type back. However, what I really want to know is that the returned collection contains the same elements, and that they're in order. That's what you really care about at the end of the day: does the function do what I intended?

This is difficult to express using most type systems out there. You could use dependent types to express it, but it certainly isn't something you get for free with Haskell.

So, you'll still have to write tests to ensure that the function is semantically correct for anything non-trivial.

It's also worth noting that Clojure Spec lets me express exactly what I care about using the same language semantics I'm already using to write regular Clojure code.

The specification will check that the arguments follow the expected pattern, and that the result is sorted, and I can do an arbitrary runtime check using the arguments and the result. In this case it can verify that the returned items match the input.

I should mention that good testing discipline is required in Eta as well, but you would focus your testing on high-level properties using frameworks like QuickCheck. Moreover, if you don't write tests, you don't lose as much as you would in a dynamic language like Clojure.
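To illustrate, the sort example from upthread is exactly what property-based testing handles well: state the properties once and let the framework generate inputs. A dependency-free sketch of the two properties (in real code you would hand them to QuickCheck rather than checking fixed samples):

```haskell
import Data.List (sort)
import qualified Data.Map as Map

-- Property 1: the output is in order.
propOrdered :: [Int] -> Bool
propOrdered xs = and (zipWith (<=) ys (drop 1 ys))
  where ys = sort xs

-- Property 2: the output is a permutation of the input
-- (same elements, same multiplicities).
propPermutation :: [Int] -> Bool
propPermutation xs = counts (sort xs) == counts xs
  where counts zs = Map.fromListWith (+) [(z, 1 :: Int) | z <- zs]
```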

Thanks for the example! My point was not that you can't test effectively in Clojure, but just that doing things like refactoring is much easier in Eta because of the numerous static checks.

My experience is that you just use different strategies for structuring your code. Static typing allows you to write monolithic projects that have lots of internal interdependencies.

This is where the ease of refactoring with static typing comes into play. An argument could be made that static typing facilitates writing code that has a lot of coupling between components.

Dynamic typing forces you to break things up into independent components much more aggressively. I think that it's a very good thing. Low coupling allows for higher code reuse, and reduced mental overhead when reasoning about code.

If you look at the Clojure ecosystem, most of it consists of small focused libraries that solve a particular problem.

Conversely, projects are structured using small independent modules that implement a particular piece of functionality. These modules can be tested at the API level.

When I make any changes, I run the API tests, and if those pass, then I know that the module does what it's supposed to. Now we also have Spec, which makes it even easier to express constraints and document the intent of the code.

I've never found the need to do TDD or have unit tests when working with Clojure. When I'm developing new functionality, I use the REPL, and I know exactly what my code is doing at any time.

Once the functionality is implemented and the code is doing what I need, I turn the REPL session into tests at that point.

"Dynamic typing forces you to break things up into independent components much more aggressively."

I have the opposite experience with Haskell vs. Common Lisp or Python, but possibly because Haskell is purely functional, while CL and Python are imperative. I tend to write smaller functions in Haskell, whereas in CL and Python I create bigger functions with intermediate variables and bindings.

Another reason is probably the absence of keyword arguments in Haskell, which kind of forces Haskell functions to do only one thing rather than many.

Yeah, I definitely think that the functional style coupled with immutability plays a huge factor.

However, I'm not referring to writing shorter functions in that comment, but rather about higher level components like namespaces.

Refactoring becomes painful when a particular piece of data is used in many parts of the application. When you change the shape of that data, then you have to make sure you update every place that uses it. This is where static typing can help ensure that you didn't miss anything in your refactoring.
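A tiny sketch of what that looks like in a Haskell-like language (the `User` record here is invented for illustration):

```haskell
-- Suppose this record is used all over the application.
data User = User { userName :: String, userAge :: Int }

greet :: User -> String
greet u = "Hello, " ++ userName u

-- If we later change the shape of the data - say, renaming
-- userName to fullName or adding a required field - every
-- function like greet stops compiling until it is updated.
-- In a dynamic language, the stale call sites only fail at
-- runtime, and only if a test happens to exercise them.

main :: IO ()
main = putStrLn (greet (User "Ada" 36))
```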

About two years ago, before Haskell became my current favorite language, I really liked Common Lisp and Python, and I was transitioning to Clojure. So perhaps I can explain what sold me on Haskell coming from Clojure and Lisps in general. (I also looked at Scala, but I am a little worried about it being too complicated by wanting to be a mix of OOP and FP.)

What I really like about Lisps is the ability to define DSLs with functions and macros. I thought Haskell couldn't do this very well, although I recognized that the most useful ability of macros - to selectively evaluate arguments - can be replaced with Haskell's lazy evaluation. Then I read the article http://www.haskellforall.com/2012/06/you-could-have-invented... and I was sold instantly. Not only can Haskell define DSLs that are on par with DSLs in Lisp, it also gives you typechecking in them for free!

There are still some use cases where Lisp macros can be stronger than standard Haskell, but not that many.
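For instance, the classic case where a strict Lisp needs a macro - a user-defined conditional - is just an ordinary function under lazy evaluation (`myIf` is a made-up name):

```haskell
-- An ordinary function, not a macro: laziness means the branch
-- that isn't selected is never evaluated.
myIf :: Bool -> a -> a -> a
myIf True  t _ = t
myIf False _ e = e

main :: IO ()
main =
  -- The error branch is never forced, so this prints "ok".
  putStrLn (myIf True "ok" (error "never evaluated"))
```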

The other thing I was worried about with Haskell was the static typechecking. I am a fan of dynamic languages, because they let me write code without having to type types everywhere. But in practice, Haskell pleasantly surprised me: if you stick to the standard things, you don't really need to specify types that much (although I try to specify them for function arguments); Hindley-Milner inference will figure it all out. Specifying types upfront is still a trade-off, but I found I don't really mind it when I don't have to write them everywhere as I do in Java or C.
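To illustrate, none of the definitions below carry a single annotation; the types in the comments are what inference would assign (as GHCi would report them):

```haskell
-- No annotations anywhere; the compiler infers general types.
double x = x * 2           -- inferred: Num a => a -> a
compose f g x = f (g x)    -- inferred: (b -> c) -> (a -> b) -> a -> c
pairUp xs = zip xs [1 ..]  -- inferred: (Num b, Enum b) => [a] -> [(a, b)]

main :: IO ()
main = print (pairUp (map double [10, 20]))  -- [(20,1),(40,2)]
```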

It also turns out Haskell isn't a complicated language: most features are either obscure typing extensions to HM, which you will avoid as a beginner anyway, or just syntactic sugar over some kind of function composition.

And I really like the functional approach, such as the monadic representation of side-effects (which gives you the ability to reinterpret the actions done by some function differently). It's a powerful paradigm, I think, even though I am not very good at it yet.
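As a sketch of that reinterpretation idea, here's a toy example of my own in the tagless-final style (the `Console` class and `Collect` type are invented): the same monadic program can run against IO or against a pure interpreter that merely records the actions, which is handy for testing:

```haskell
class Monad m => Console m where
  say :: String -> m ()

-- Real interpretation: actually print.
instance Console IO where
  say = putStrLn

-- Pure interpretation: collect the output for inspection.
newtype Collect a = Collect { runCollect :: ([String], a) }

instance Functor Collect where
  fmap f (Collect (w, a)) = Collect (w, f a)
instance Applicative Collect where
  pure a = Collect ([], a)
  Collect (w1, f) <*> Collect (w2, a) = Collect (w1 ++ w2, f a)
instance Monad Collect where
  Collect (w1, a) >>= f =
    let Collect (w2, b) = f a in Collect (w1 ++ w2, b)
instance Console Collect where
  say s = Collect ([s], ())

-- One program, two interpretations.
greetTwice :: Console m => m ()
greetTwice = say "hello" >> say "world"

main :: IO ()
main = do
  greetTwice                           -- IO run: prints the lines
  print (fst (runCollect greetTwice))  -- pure run: ["hello","world"]
```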

I even joke that it's simpler than Java, because Java people have to be really smart to work with all those objects and patterns, whereas I can make do with types and functions (and an occasional type class), and I am not so smart, so I prefer Haskell. (There is truth to it, IMHO: the relatively clean and abstract design of Haskell makes many higher-order things more straightforward than in Java.)

There are plans to make the compiler self-hosting once it's more stable. The little bit of C used is just to turn on an option in the GHC runtime for profiling purposes. That would be replaced with some Java code to turn on an option in the Eta runtime once it's self-hosting.

Laziness by default does not mean you can't have strict primitive types, which Eta does have. Moreover, if you ever want to avoid object references in performance-sensitive contexts, you can allocate off-heap memory and work with it directly via the Ptr mechanism (which is backed by DirectByteBuffers). GHC 8 recently got compound values with an extension called UnboxedSums. It would be tricky, though not impossible, to implement in Eta on the JVM.

OTOH, most of the things I actually use laziness for in Haskell do not fall into the `foo lazy -> bar` category, because they involve things like floating out IO actions or renaming things; in fact, changing the type makes them more cumbersome, e.g.

if x then error "bad" else thing

to

let z = error "bad" in if x then z else thing

Which isn't a valid transformation in any strict language. This general idea is pervasive in the code I write, where the act of binding something is immaterial to its evaluation. I think we use this style a lot more than we give ourselves credit for in Haskell. You can of course wrap this in a thunk, and some of the usage style can be approximated by a monadic type. But this is all just really cumbersome and annoying to do pervasively. It's the best benefit I get from laziness, to structure code this way. You also end up duplicating a lot of strict-vs-lazy code either way you pick, since "crossing the streams" is generally either forbidden by the type system (in your example) or you need the different implicit characteristics (like in Haskell). It's not really clear to me this is a win overall.
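A runnable version of the point being made here (wrapping the snippet in a function I've named `check`): binding the error to a name never forces it, so the `let` form is perfectly safe in Haskell, whereas in a strict language it would blow up before the `if` is even reached:

```haskell
-- Binding is decoupled from evaluation: z names the error
-- but evaluates it only if that branch is actually taken.
check :: Bool -> String -> String
check x thing = let z = error "bad" in if x then z else thing

main :: IO ()
main = putStrLn (check False "all good")  -- z is bound but never forced
```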

I'm not opposed to strict languages, but IMO, I think if you want a strict language, you're better off just forgoing the whole thing, and using lambdas (thunks) where needed for small delays, and a good macro system to define control structures for everything else rather than trying to shoehorn laziness into your types or whatever. Random thunks you occasionally need aren't really the benefit. Being able to decouple definition from evaluation is.

In any language, not just a lazy one, “let x = v in t” is beta-equivalent to “t[x:=v]” whenever “v” is beta-equivalent to a value in the language. Of course, in a call-by-need (or pure call-by-name) language, every term is a value. In a call-by-value language, some terms are not values (and this is a feature).

Yeah, and it also allows addressing beyond 2GB. I've been using DirectByteBuffers for the sake of compatibility with Android, which I don't think gives you access to the Unsafe API. I'll see if I can add a compiler option to use Unsafe on the JVMs that have it in the future. That'll probably do wonders for performance, since Unsafe calls are JIT intrinsics. Unsafe is currently used in the Eta RTS for atomic CAS operations on some of the RTS data types (which should be compatible with Android).

I really don't understand the need to squeeze as much functionality as possible out of as few lines as possible, to the point that the code makes no sense to read. How am I supposed to parse the cryptic opening example, which shows the 2048 game being played? It looks like a ton of stuff has just been hidden away, which is fine for the example but likely all falls apart when you try to add something new to the game.

I'd argue the exact opposite: what we don't need right now is yet another language that differs from its peers mostly in aesthetics and engineering. If a language doesn't offer something conceptually new, it might have been better offered as a library.

You can; see https://github.com/rahulmutt/eta-2048. The whole point of that example was to demonstrate that you can build nontrivial things with Eta right now. You just have to deal with Java FFI boilerplate, which requires some patience without any IDE support.

Throughout eta-lang.org we use a custom syntax highlighter we developed for CodeMirror (used in the playground) and Pygments for the static code examples throughout the docs. We will translate these to other platforms like Emacs, Vim, Eclipse, IntelliJ, etc. once IDE support is better.

It's amazing that if you line up all the languages that consider themselves powerful and scalable, you somehow arrive at religion... everybody thinks their god is the only one, and if one indeed exists, all the rest are wrong by default!

Technology is fair, and there is a reason why Haskell is not used apart from niche or academic projects. If you want to use the JVM to deliver functionality for your users, just use a "boring" technology. No user has ever cared about the programming paradigm.

You would be correct if we branded it as "the" powerful language, but we used "a", meaning it's one of many for you, the programmer, to choose from. I have found Haskell-like languages to deliver lots of value in practical business settings, and I want to bring those advantages to everyone who would like to take advantage of them, reducing the risk of adoption as much as possible. No user ever cared about the programming paradigm, but I think they would be happy if bugs were fixed faster (or weren't present in the first place), or if new features were added without breaking existing ones.