Thoughts on some upcoming programming languages

It's interesting because I've had pretty much the opposite impression of Haskell: I really love immutability and laziness.

Laziness may be a performance issue, but it fits the way I think better, and I would prefer expressiveness over performance by default. I'm constantly happy that I can easily work with infinite structures and that I don't have to worry about calculating more than I need--my code does exactly what I want much more neatly than in other languages. Laziness is also an almost magic answer to some of his gripes--you don't need a break in a fold because you will only ever calculate the part of the list you use! I really love this.
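
To show what I mean, here is a tiny, self-contained sketch (the function names are made up):

    -- Laziness lets us define an infinite structure and only pay for what we use.
    firstSquareOver :: Integer -> Integer
    firstSquareOver n = head (dropWhile (<= n) squares)
      where
        squares = map (^ 2) [1 ..]  -- conceptually infinite, never fully built

    -- A right fold can short-circuit on infinite input: (||) ignores its second
    -- argument once it sees True, so the rest of the list is never forced.
    anyEven :: [Integer] -> Bool
    anyEven = foldr (\x acc -> even x || acc) False

    -- anyEven [1 ..] returns True after forcing only the first two elements.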

Also, regarding the loop: I don't see why a simple recursive function wouldn't do if you need some very complicated iterative behavior. I've found recursion in Haskell--largely thanks to pattern matching--to be much neater than loops in other languages. While a fold or zip or whatever is always my choice if applicable, I have also written some fairly complex mutually recursive functions almost without noticing. If anything, I've found this easier than writing complex for or while loops.
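
For example, a runnable sketch of the kind of pattern-matched recursion I mean (run-length counting; names made up):

    -- Each equation states one case; there is no loop index and no mutable
    -- accumulator, just the shape of the list.
    runLengths :: Eq a => [a] -> [(a, Int)]
    runLengths []       = []
    runLengths (x : xs) = (x, 1 + length same) : runLengths rest
      where
        (same, rest) = span (== x) xs

    -- runLengths "aabbba" == [('a',2),('b',3),('a',1)]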

Immutability is another thing I really like. I think I've found a good way to describe it: in Haskell, you can write code with mutable state. But you only ever do it for performance or to model mutable state; you never do it for flow control. This, like laziness, lets me worry less--I can pass all sorts of structures around without caring whether I accidentally update the wrong list. I've worked on some similar projects in both Python and Haskell, and found the pervasive mutability in Python annoying: I couldn't just pass a list into a function and use it; I had to make sure it copied correctly and so on. This is even worse in Java with its dysfunctional clone method; in Haskell, it just works.

Really, both laziness and immutability--much like the type system--just take away stress. I can express my ideas more simply and more directly because I don't have to worry about computing too much or messing up state anywhere (or, by analogy, making stupid type mistakes).

I've also found that I use the Maybe, List and Either monads the most--even more than IO. Functions like foldM with Maybe or List are immensely useful, and even simple uses of them pay off. Most of the IO stuff I do could be expressed with Applicative rather than Monad. Of course, my IO has been simple--mostly REPLs and command-line interfaces--so I don't have the best perspective here.
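
A small sketch of the foldM-with-Maybe pattern I mean (safeDiv is made up):

    import Control.Monad (foldM)

    -- Division that fails on zero, threaded through a fold: the whole
    -- computation becomes Nothing as soon as any single step fails.
    safeDivAll :: Integer -> [Integer] -> Maybe Integer
    safeDivAll = foldM safeDiv
      where
        safeDiv _   0 = Nothing
        safeDiv acc d = Just (acc `div` d)

    -- safeDivAll 100 [2, 5]    == Just 10
    -- safeDivAll 100 [2, 0, 5] == Nothing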

Now, Haskell does have some issues. Record syntax is a big one. My solution so far has been simple: simplify my data types as much as possible. I would much rather have some very simple types and more complex functions than vice versa. However, I understand this is going to depend on your project. I also think it depends on mindset: if you approach the problem like an OO programmer, you're going to want to build up more complicated nested types and so on. I've found that divorcing what the data is from how it behaves, and then focusing only on the former, is very helpful here. Of course, the majority of the stuff I've done has been related to programming languages (okay, all of the stuff I've done so far :)), so perhaps this is an artifact of that particular domain.

The standard library is also annoying--insufficiently polymorphic functions (map) and partial functions (head, tail and so on) are unfortunate. I also don't like the numeric type hierarchy--there is no Natural type and no good way to add one, and a bunch of standard functions use Int when they should be more general. (Functions like genericLength should not be needed!) Luckily, this is a part of the language we as users can change by using a different Prelude, so it isn't all bad, although this has some downsides as well.
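
A quick illustration of the genericLength point:

    import Data.List (genericLength)

    -- length is fixed to Int, so an average over Double needs a workaround:
    -- either fromIntegral (length xs) or the awkwardly named genericLength.
    mean :: [Double] -> Double
    mean xs = sum xs / genericLength xs

    -- With a more general length :: Num i => [a] -> i, genericLength
    -- would not need to exist.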

It also has a bit more syntactic sugar than I would like--we don't really need if statements (they could just be a function) and negation is confusing and a bit of a hack. Negative literals are overrated anyhow ;).
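
For instance, a sketch of if as a plain function--laziness is what makes this work, since only the chosen branch is ever evaluated:

    if' :: Bool -> a -> a -> a
    if' True  t _ = t
    if' False _ f = f

    classify :: Int -> String
    classify n = if' (n > 100) "big" "small"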

I've also had issues with Cabal. I love it as a build system, but using it to install complicated packages is not fun. I'm hoping switching to Arch will help, as soon as I figure out how to configure RAID0 with their installer :). Having a real package manager with more Haskell packages than Yum will be really great, I hope.

So, despite its weaknesses, I think Haskell is a much better choice than any alternative language I have seen. I have written some very similar projects--mostly dynamic language interpreters--in several languages now: Haskell, Scheme, JavaScript, Python and a pathetic imitation of Lua (for a class we first implemented a simple language vaguely akin to Lua, then used it to implement Prolog). All of these projects are very similar, so I am in a good position to judge these languages, at least in this context. Scheme was very good; JavaScript was not bad; Python was fairly annoying and the Lua-like language even more so. However, the most pleasant was Haskell, by far, at least partly because of immutability and laziness.

I think there are some very interesting languages to learn on the horizon; particularly, I think a good dependently typed language is exactly what I need. I also suspect that for most code, a total functional language would be nice--it would remove another potential source of stress, although this time at the expense of power. However, as far as general-purpose, Turing-complete languages go, I still think Haskell is basically optimal.

This was an absurdly long comment, but it helped me organize my thoughts on the subject of Haskell, so it was worth writing even if nobody reads it.

It's interesting because I've had pretty much the opposite impression of Haskell: I really love immutability and laziness.

I love immutability & language-level support for it, too. But I want "first-class" language support for mutability as well.

Laziness may be a performance issue, but it fits the way I think better, and I would prefer expressiveness over performance by default. I'm constantly happy that I can easily work with infinite structures and that I don't have to worry about calculating more than I need--my code does exactly what I want much more neatly than in other languages. Laziness is also an almost magic answer to some of his gripes--you don't need a break in a fold because you will only ever calculate the part of the list you use! I really love this.

Yes, I understand the advantages of laziness. Have you ever encountered a space leak? I sometimes feel like they are the null-pointer-exceptions of Haskell. Sometimes they don't just make the code a bit slower; they make it unbearably slow, or all your memory is eaten up. And is laziness by default really needed for infinite structures? What about streams (e.g. in Scheme or Scala)? If I remember correctly you can achieve infinite structures (without computing all of them) with generators in Python. Regarding the fold: yes, there are a lot of cases where a fold is ideal - I don't disagree with that. The point I wanted to make is that there are use cases where I think traditional loops are clearer.
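
(For other readers: the classic textbook example, not one from the article, is summing with a lazy left fold.)

    import Data.List (foldl')

    -- foldl builds a huge chain of suspended (+) thunks before forcing any
    -- of them, so this can exhaust memory on large inputs:
    leaky :: Integer
    leaky = foldl (+) 0 [1 .. 10000000]

    -- foldl' forces the accumulator at each step and runs in constant space:
    fine :: Integer
    fine = foldl' (+) 0 [1 .. 10000000]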

Also, regarding the loop: I don't see why a simple recursive function wouldn't do if you need some very complicated iterative behavior. I've found recursion in Haskell--largely thanks to pattern matching--to be much neater than loops in other languages. While a fold or zip or whatever is always my choice if applicable, I have also written some fairly complex mutually recursive functions almost without noticing. If anything, I've found this easier than writing complex for or while loops.

Yes, recursion in pattern-matching-supporting languages is more elegant than in others. I still think there are reasonable cases where a while or a for loop is at least just as clear or clearer than recursive functions (or the composition of several combinators) but your mileage may vary.

Immutability is another thing I really like. I think I've found a good way to describe it: in Haskell, you can write code with mutable state. But you only ever do it for performance or to model mutable state; you never do it for flow control. This, like laziness, lets me worry less--I can pass all sorts of structures around without caring whether I accidentally update the wrong list. I've worked on some similar projects in both Python and Haskell, and found the pervasive mutability in Python annoying: I couldn't just pass a list into a function and use it; I had to make sure it copied correctly and so on. This is even worse in Java with its dysfunctional clone method; in Haskell, it just works.

Really, both laziness and immutability--much like the type system--just take away stress. I can express my ideas more simply and more directly because I don't have to worry about computing too much or messing up state anywhere (or, by analogy, making stupid type mistakes).

Now, Haskell does have some issues. Record syntax is a big one. My solution so far has been simple: simplify my data types as much as possible. I would much rather have some very simple types and more complex functions than vice versa. However, I understand this is going to depend on your project. I also think it depends on mindset: if you approach the problem like an OO programmer, you're going to want to build up more complicated nested types and so on. I've found that divorcing what the data is from how it behaves, and then focusing only on the former, is very helpful here. Of course, the majority of the stuff I've done has been related to programming languages (okay, all of the stuff I've done so far :)), so perhaps this is an artifact of that particular domain.

The thing is, the record syntax is so bad that you feel the pain even for the simplest records. At the end of the article I pointed out that Haskell and the likes are still of course great, especially for the tasks functional languages are naturally good at. It happens that I also implement some tiny languages in Haskell and it was great. I didn't even use records. The only problems I had were space leaks, again :). But in other projects Haskell's weaknesses are more prevalent imo.

The standard library is also annoying--insufficiently polymorphic functions...

I've also had issues with Cabal. I love it as...

Fully agree

So, despite its weaknesses, I think Haskell is a much better choice than any alternative language I have seen. I have written some very similar projects--mostly dynamic language interpreters--in several languages now: ...

Yes, as I pointed out above I agree that Haskell and the likes are almost ideal for implementing languages when it comes to the elegance (& robustness?) of the implementation.

I think there are some very interesting languages to learn on the horizon; particularly, I think a good dependently typed language is exactly what I need. I also suspect that for most code, a total functional language would be nice--it would remove another potential source of stress, although this time at the expense of power. However, as far as general-purpose, Turing-complete languages go, I still think Haskell is basically optimal.

But have you ever programmed in such a language? I haven't, but I don't believe the improved static guarantees come without their price.

This was an absurdly long comment, but it helped me organize my thoughts on the subject of Haskell, so it was worth writing even if nobody reads it.

Yes, I understand the advantages of laziness. Have you ever encountered a space leak? I sometimes feel like they are the null-pointer-exceptions of Haskell.

Perhaps you're still not experienced enough to get a feeling for where to add strictness. I'm still a Haskell noob, having used it for about a year, but looking back at my experience with C++, a lot of what felt hard at the beginning isn't a big issue anymore. But still, from time to time I hit an issue, and it might take hours to get past it.

Some things are just hard, and I don't think that you can make them a lot easier without getting issues at other places.

And is laziness by default really needed for infinite structures? What about streams (e.g. in Scheme or Scala)? If I remember correctly you can achieve infinite structures (without computing all of them) with generators in Python.

Without laziness you have to handle infinite structures in a special way. The function take in Haskell doesn't care if it gets a finite or an infinite list. In Python you would have to write two versions, one for lists and one for generators, or you write one against iterators, but that's only possible if all you do is iterate over the list.

The nice thing about laziness is that the consumer doesn't have to care if the data is infinite, and the producer doesn't have to create the whole data up front. That's a nice property for modular, independent systems.
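
A minimal sketch of that decoupling, using the usual toy prime sieve:

    -- The producer describes all primes; it has no idea how many will be used.
    primes :: [Integer]
    primes = sieve [2 ..]
      where
        sieve (p : xs) = p : sieve [x | x <- xs, x `mod` p /= 0]

    -- The consumer uses the same take that works on any finite list.
    firstTen :: [Integer]
    firstTen = take 10 primes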

The thing is, the record syntax is so bad that you feel the pain even for the simplest records.

Yes, nested records suck. But I think that lenses make it a lot less painful.

Perhaps you're still not experienced enough to get a feeling for where to add strictness. I'm still a Haskell noob, having used it for about a year, but looking back at my experience with C++, a lot of what felt hard at the beginning isn't a big issue anymore. But still, from time to time I hit an issue, and it might take hours to get past it.

It certainly is true that experience helps (doesn't it help with everything?) in finding & eliminating space leaks. However, as pointed out in the article, even the most experienced Haskellers have to fight with space leaks from time to time. It seems some of the (performance) implications of laziness are so intricate that even experience can't save you from them.

Some things are just hard, and I don't think that you can make them a lot easier without getting issues at other places.

This statement is true by itself, but here it seems to me a little apologetic, if you allow me this interpretation. Anyway, I think the important question is whether "getting issues at other places" is better on the whole than having laziness by default--or rather, whether abandoning laziness by default is a net gain even if that means other problems will arise.

Without laziness you have to handle infinite structures in a special way. The function take in Haskell doesn't care if it gets a finite or an infinite list. In Python you would have to write two versions, one for lists and one for generators, or you write one against iterators, but that's only possible if all you do is iterate over the list.

Fair point. But is this syntactic & semantic consistency worth it? Maybe it is. Although Haskell itself doesn't seem to place much value on such consistency: there are almost always pure and monadic duals of functions, e.g. map & mapM, or standard library inconsistencies like fmap & map. So I remain skeptical that this implication of laziness is all that important.

The nice thing about laziness is that the consumer doesn't have to care if the data is infinite, and the producer doesn't have to create the whole data up front. That's a nice property for modular, independent systems.

Fair point. But what about the whole saga of enumerators, iteratees and what not? For pure code your point may stand; however, in practice laziness does not seem to fulfill this promise. Correct me if I am wrong, but this is the impression I got.

Yes, nested records suck. But I think that lenses make it a lot less painful.

True. As I see it, there is still a problem: you first have to understand the concept of lenses. Ideally, you would not have to think about it at all. And using the various lens libraries increases the syntactic overhead. If I remember correctly, for some libraries you have to adhere to their naming conventions, use Template Haskell and understand how to use the library. All for the tiny little wish of using records sanely.
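
For reference, a small sketch of what this looks like with the lens library (the record and field names here are made up):

    {-# LANGUAGE TemplateHaskell #-}
    import Control.Lens

    data Address = Address { _city :: String }                      deriving Show
    data Person  = Person  { _name :: String, _address :: Address } deriving Show
    makeLenses ''Address  -- Template Haskell; note the underscore convention
    makeLenses ''Person

    -- Without lenses, a nested update rebuilds every layer by hand:
    --   p { _address = (_address p) { _city = "Berlin" } }
    -- With lenses it is one composition:
    move :: Person -> Person
    move p = p & address . city .~ "Berlin"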

You might ask why we need a separate map function. Why not just do away with the current list-only map function, and rename fmap to map instead? Well, that’s a good question. The usual argument is that someone just learning Haskell, when using map incorrectly, would much rather see an error about lists than about Functors.

Haskell is used in universities and other contexts for pedagogical reasons, and maps and folds are among the first things they teach. It's probably helpful if their error messages don't mention Functors and Foldables.
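
For anyone following along, the inconsistency being discussed:

    -- map works only on lists; fmap is the general version for any Functor.
    doubledList :: [Int]
    doubledList = map (* 2) [1, 2, 3]      -- [2,4,6]

    doubledMaybe :: Maybe Int
    doubledMaybe = fmap (* 2) (Just 3)     -- Just 6

    -- For lists the two coincide (fmap = map at type [a]), which is why
    -- merging them into a single map is often proposed.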

I knew about this argument. I acknowledge it although I don't like it.

You have pointed out another instance where Haskell does not see consistency as the absolute end-goal. Here, for example, consistency is traded for better error messages. This helps back up my argument in the other comment.

It certainly is true that experience helps (doesn't it help with everything?) in finding & eliminating space leaks. However, as pointed out in the article, even the most experienced Haskellers have to fight with space leaks from time to time. It seems some of the (performance) implications of laziness are so intricate that even experience can't save you from them.

The point is, how often does it happen for the experienced Haskeller? Once a week or once a year?

Fair point. But is this syntactic & semantic consistency worth it? Maybe it is. Although Haskell itself doesn't seem to place much value on such consistency: there are almost always pure and monadic duals of functions, e.g. map & mapM, or standard library inconsistencies like fmap & map. So I remain skeptical that this implication of laziness is all that important.

Because there are other inconsistencies, we shouldn't care about consistency at all? I don't think that's your point, but your argument goes that way.

I think that Haskell is one of the most consistent "almost" mainstream languages out there.

Fair point. But what about the whole saga of enumerators, iteratees and what not? For pure code your point may stand; however, in practice laziness does not seem to fulfill this promise. Correct me if I am wrong, but this is the impression I got.

The point is, how often does it happen for the experienced Haskeller? Once a week or once a year?

A valid objection. If it only ever happened seldom, I doubt there would be so much controversy in the Haskell community with regard to laziness. That's my impression anyway.

Because there are other inconsistencies, we shouldn't care about consistency at all? I don't think that's your point, but your argument goes that way.

You are right, this is not my point. I only said that the consistency gain for some functions enabled by laziness is perhaps less valuable than the performance & space reasoning problems that accompany laziness. To support this idea I pointed out that even Haskell does not value consistency that much.

I think that Haskell is one of the most consistent "almost" mainstream languages out there.

You could argue that Java is a very consistent language, too, because you can only ever have classes. That doesn't mean I disagree with you, but it is a claim that should generally be backed up.

A valid objection. If it only ever happened seldom, I doubt there would be so much controversy in the Haskell community with regard to laziness. That's my impression anyway.

Where does this controversy come from? Novices or people with experience? You cite the Simon Marlow case but how often does Simon Marlow hit a godawful space leak compared to a novice? A thousandth of the time? Given that, who's more likely to complain, and where do the bulk of complaints lie? Novices or more experienced users?

There are a lot of factors here. But to make it simple, if every experienced Haskeller hit an intractable space leak every week, then maybe there'd be a problem. I see little to no evidence that this is the case; by and large it is novices who hit them with any regularity whatsoever, in my experience. If anything, the performance problems of an experienced Haskeller almost always come down to GHC's ability to inline or do code fusion, generally speaking. This can be difficult to predict and account for - especially inlining. It also requires far more fine-grained analysis and tuning, but Criterion helps a lot. Memory usage is another factor, but this is generally orthogonal to laziness (and has to do with unpacking data types, for example, to save indirections and leverage your caches better).

(NB: Even then we can only say it's a correlation, because having a leak doesn't mean you have to complain.)

There are valid arguments for not having laziness by default, and I think we need better performance tools, and we're getting there, but I have been writing Haskell for years, and there is a completely crucial point to me: the cost of space leaks is so overwhelmingly dwarfed by the modularity it enables in my programs that it is more than a worthy trade off in my experience.

Laziness is the reason Haskell programs can be so well abstracted: it means we can constantly 'pull out' subexpressions into separate terms with no effect on program semantics. Because laziness is crucially tied to purity, this is basically a form of equational reasoning and abstraction. These transformations are not valid in the general case in a strict language. As a result you sometimes contort your code to make it work, or worse, duplicate it.
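
A tiny illustration of what I mean by pulling out subexpressions (expensive is a stand-in for any costly or even diverging term):

    expensive :: Int
    expensive = sum [1 .. 1000000]

    -- Naming a subexpression with let never changes the meaning:
    f1, f2 :: Bool -> Int
    f1 b = if b then expensive else 0
    f2 b = let x = expensive in if b then x else 0

    -- Under laziness f1 and f2 are interchangeable; in a strict language
    -- f2 would evaluate expensive even when b is False.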

Laziness makes abstraction considerably easier and more general, with a far lower cost on average in terms of amount of code and maintainability - as a programmer, I value that far beyond many other concerns, including hitting the rare space leak.

EDIT: I'll also add a tip up front for optimization purposes: you should pretty much always look at optimizing your data types before anything else. Unpack composite fields and strictify that which you can afford to be strict - many data types do not semantically rely on their components being lazy, so this is fine. This is where the bulk of any of my 'optimization' goes - I rarely find myself reaching for seq or bang patterns in regular code.
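
To make that concrete, a minimal sketch of the kind of data type declaration I mean:

    -- Strict, unpacked fields: the Ints are stored inline rather than as
    -- pointers to possibly-unevaluated thunks, so folding over a [Point]
    -- cannot accumulate per-field thunks.
    data Point = Point
      { px :: {-# UNPACK #-} !Int
      , py :: {-# UNPACK #-} !Int
      } deriving Show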

It certainly is true that experience helps (doesn't it help with everything?) in finding & eliminating space leaks. However, as pointed out in the article, even the most experienced Haskellers have to fight with space leaks from time to time. It seems some of the (performance) implications of laziness are so intricate that even experience can't save you from them.

In the end you have to pick your poison. No approach is going to be perfect, but in Haskell it's much easier to write correct code than in imperative languages. So, once in a while you'll have a performance issue, and something might turn out to be awkward, but you have to balance that against all the problems that you don't have, such as data inconsistency and the complicated state management you have in imperative languages.

A lot of what Haskell offers is there exactly because it only offers immutability; once you add mutability to the mix, all the guarantees go out of the window. And in my opinion it's not a good trade at all.

No approach is going to be perfect, but in Haskell it's much easier to write correct code than in imperative languages.

I agree with your point if "imperative languages" is replaced with "current imperative languages".

A lot of what Haskell offers is there exactly because it only offers immutability; once you add mutability to the mix, all the guarantees go out of the window.

I don't agree with this point, especially the part that "all the guarantees go out of the window". In this discussion and in the article the language Disciple was mentioned. Although a research project, it is able to allow mutability and side-effects in a controlled manner, seemingly without much syntactic overhead. I'm pretty sure there are other such projects (if you look through this site you'll certainly find something).

By the way, Haskell does offer mutability through various monads (ST, IO...) or even unsafe function calls, and at least in some cases (like the ST monad) mutability can be used without losing guarantees.
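
A small sketch of the ST pattern: a function with an ordinary pure type that uses mutation strictly internally.

    import Control.Monad.ST (runST)
    import Data.STRef (modifySTRef', newSTRef, readSTRef)

    -- runST guarantees the mutable reference cannot escape, so sumST is
    -- referentially transparent despite the in-place updates inside.
    sumST :: [Int] -> Int
    sumST xs = runST $ do
      ref <- newSTRef 0
      mapM_ (\x -> modifySTRef' ref (+ x)) xs
      readSTRef ref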

Anyway, I'm not arguing that Haskell is bad (the article didn't make that claim either). I know it's good. Here and in the article I wanted, among other things, to discuss the viability of the languages mentioned in the article as a better alternative to Haskell and other currently available languages.

I agree with your point if "imperative languages" is replaced with "current imperative languages".

That qualification is meaningless, because it's the nature of mutable data that makes it hard to reason about. There is no magic out there that can change this fact. When data is mutable, then any time you make a change you have to be aware of the complete state of the program and what that change might affect. With immutable data every change is implicitly contextualized and cannot affect anything outside its scope. This allows you to reason about parts of the program in isolation.

The point I'm making is that as soon as you're in mutable land then you have all the problems associated with mutability. If you mix mutable data with immutable data, then the problem spreads to all your data.

Disciple and monads in Haskell allow you to isolate mutable data, but internally it's just as problematic as in any other language. So, best you can do is minimize the amount of mutable data and clearly mark it as such, which is basically what Haskell encourages doing.

Minimizing mutable data/state, and ideally marking it as such, is very important, and that's one of the lessons I've learned from Haskell.

Your argument though seems to paint mutability as a one-sided evil. Why do I have to be aware of the complete state of the program when I update mutable data? What about mutable data that is only ever accessible inside the scope of a function and does not leak outside (see ST)? What about mutability to allow caching? Why is the mutable land so sinful for you?

I think shared mutable data is a problem. See for example linear types or uniqueness types, or some approaches in Rust regarding mutability. You can have it (at the very least in principle) without suddenly losing the ability to reason about the program.

Your argument though seems to paint mutability as a one-sided evil. Why do I have to be aware of the complete state of the program when I update mutable data?

Because you can never be guaranteed that functions are pure when you deal with mutable data.

What about mutable data that is only ever accessible inside the scope of a function and does not leak outside (see ST)? What about mutability to allow caching? Why is the mutable land so sinful for you?

It's not about it being sinful, it's about being able to reason about large software programs. I work with a lot of Java, and let me tell you that tracking state in large projects is a nightmare. I'm not sure why you need mutability to allow caching either; in fact, it's much easier to do caching with immutable data--memoize in Clojure is a good example.

When software gets large, it's not practical to go through every single call that leads you to a particular place in the code, and so you start making assumptions about what the code you're calling is doing. Whenever that code decides to mutate the data in a way you didn't account for you get errors. When you have immutability that whole problem simply goes away. The language keeps track of the revisions of the data, and existing data is never changed.

I think shared mutable data is a problem.

Ensuring that you don't have shared mutable state is not trivial, one approach is to track it through monads the way Haskell does.

Because you can never be guaranteed that functions are pure when you deal with mutable data.

But I provided counterexamples in the answer you are referencing. Functions can still be considered pure / referentially transparent if they use mutable data structures only inside themselves, without altering shared/global state. Another (sort of) example is the pure keyword for functions in the D language (see here).

I'm not sure why you need mutability to allow caching either; in fact, it's much easier to do caching with immutable data--memoize in Clojure is a good example.

Without knowing what swap! does, I do know that in Scheme / Lisp a ! is generally used to label a function with side effects, i.e. one that changes mutable state. So I'm pretty sure the memoize function indeed uses mutability to great effect, i.e. to enable caching.

And I agree with your points about the reasoning of large Java software. I made a similar point as you do in one of my earlier blog posts.

I somehow have the feeling that we talk at cross purposes. I still found it fruitful to argue with you so far.

Perhaps you're still not experienced enough to get a feeling for where to add strictness. I'm still a Haskell noob, having used it for about a year, but looking back at my experience with C++, a lot of what felt hard at the beginning isn't a big issue anymore.

Oh, don't worry, SPJ himself believes that while laziness-by-default was extremely beneficial for the evolution of Haskell as a pure language, "the next Haskell will be strict".

But have you ever programmed in such a language? I haven't, but I don't believe the improved static guarantees come without their price.

Although I'm no computer scientist (indeed, I'm an economist with a dilettante's interest here), it seems to me that future languages rather have to go in the direction of something like dependent typing. I'd like to live in a future where, say, I can specify that a function should work only on invertible matrices and have the compiler inform me when that invariant would be broken. This could go a long way toward reducing the kind of errors made daily by non-programmer scientists and statisticians.

I suppose I don't know whether such a thing is practically speaking possible; even examples in existing languages of length-encoded vectors are a bit of a struggle for me to understand. But it'd be awesome.
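
For concreteness, this is the kind of length-encoded vector example I mean (GHC Haskell with promoted data kinds and GADTs; it only approximates real dependent types, and the names are made up):

    {-# LANGUAGE DataKinds, GADTs, KindSignatures #-}

    data Nat = Z | S Nat

    data Vec (n :: Nat) a where
      VNil  :: Vec 'Z a
      VCons :: a -> Vec n a -> Vec ('S n) a

    -- The types force both vectors to have the same length, so no runtime
    -- check is needed and a mismatched call is rejected at compile time.
    vzip :: Vec n a -> Vec n b -> Vec n (a, b)
    vzip VNil         VNil         = VNil
    vzip (VCons x xs) (VCons y ys) = VCons (x, y) (vzip xs ys)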

Yeah, I find the prospect of "mainstream" dependently typed programming languages exciting as well. But I have my doubts about the practicality of such languages and whether the hassle to please the type checker will be worth the cost. For some applications this is certainly the case--otherwise there wouldn't be any proof assistants. But for general programming? I'm not knowledgeable enough to have an answer to this question, so I leave it at that.

It's interesting because I've had pretty much the opposite impression of Haskell: I really love immutability and laziness.

Have you written anything real in it yet?

Having a real package manager with more Haskell packages than Yum will be really great, I hope.

Don't get your hopes too high up. Arch has ghc and cabal-install and a few popular packages in the community sections, but everything else is either in AUR (which doesn't really give you any benefit over cabal) or in external unofficial repos (the problems of which should be obvious).

Well, for certain values of "real" :). I'm a college student, so I haven't written anything really "real" in any language, but I can safely say that the stuff I've written in Haskell is at least as complex as what I've written in other languages. Additionally, some of the stuff I've written I've done in multiple languages, and Haskell has always been more pleasant because of both laziness and immutability.

I have had some minor issues with laziness and I could easily see it being really annoying on other projects; however, I have not been annoyed by default immutability and really doubt I ever will be. After all, if I really want mutability for whatever reason, I can have it, just localized. I don't just not need mutation globally, I'm actively better off without it.

Also, I actually have a couple of other reasons for wanting to try Arch. For one, even user-maintained packages in a proper package manager will probably win over Cabal. Also, I'm tired of having to wait for the latest shinies I keep on reading about, but also too lazy to build stuff myself. And I would definitely sacrifice stability for having the latest stuff :).

I liked your comment, some good points in there. I've two things to add:

there are a lot of complaints about laziness, and that's understandable. Unlike the functional vs imperative argument ("it feels weird because you were raised as an imperative programmer"), I don't think it's fair to say that people raised in a lazy language wouldn't still find strict languages easier to reason about. But even with that in mind, I think the advantages of laziness are enough that this is a case where people should bite the bullet. I do wish there were better ways to learn how to reason about laziness, however

I used to agree with you about partial functions in the standard library (head, tail) but I've since changed my stance when I saw some code that used them in a situation where it was impossible for the function to fail. There are circumstances where head and tail are safe to use, and I think that they are better than a safeHead alternative, which sends you into a Maybe monad. More development around partial functions is deserved, perhaps coupled with phantom types. Instead of having a function that only works on directed acyclic graphs, for example, you could use phantom types + a partial function. This approach appeals to me more than making everything a total function.
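
A sketch of the two styles (the NonEmpty type is hand-rolled here for illustration):

    -- Style 1: total, but pushes every caller into Maybe.
    safeHead :: [a] -> Maybe a
    safeHead []      = Nothing
    safeHead (x : _) = Just x

    -- Style 2: encode the precondition in a type, and head becomes total
    -- for free--no Maybe, no partiality.
    data NonEmpty a = a :| [a]

    neHead :: NonEmpty a -> a
    neHead (x :| _) = x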

Just saying, you need to try Scala. If you loved Haskell, it offers much (but not all) of what makes Haskell great, with everything you occasionally find yourself needing from OO and imperative languages.

Worth noting that Disciple is a project of only a few Australian FP academics. It's still not nearly ready for prime-time, but we believe that Disciple really represents the future of systems programming which require more effects and mutability than Haskell does comfortably.

I disagree with the author on many other points though: Ceylon, Dart, Kotlin etc. are all languages that miss the point. We need to move away from unrestrained mutability and side-effects in programming. Disciple supports mutability, but tracks it with great detail in the type system. You know what can mutate and perform side-effects and what cannot statically. Dart's type system is broken and pretty much useless, Ceylon and Kotlin are minimal improvements over Java. Even Scala doesn't get everything right with the untracked mutability.

Also, for high-level applications programming, avoiding mutation and side-effects is paramount. Sadly, the only even-close-to-mainstream language which assists with this properly at the moment is Haskell. Just because Disciple gives the option of using effects more liberally doesn't mean that you should ;)

In some way I agree with you. In an ideal world we would have a language that statically tracks side effects in a very fine-grained way without forcing the user to provide enormous type annotations or other syntactic burdens. And it's true that Haskell is at the moment "the only even-close-to-mainstream language" which assists with that.

But I wonder if the way side-effect tracking is implemented in Haskell is worth it for most projects? There are several problems in my opinion. First of all, the side-effect tracking is not really fine-grained: to my knowledge you can't (easily) express, e.g., that in the same monadic expression a variable a can do full IO but a variable b can only perform some restricted form of it. You can sort of separate different side-effect levels with a monad stack or with some special monad like RWS, but in my opinion it's too much overhead compared with imperative languages. The other alternative is to put every piece of impure code directly in the IO monad, but writing imperative code in the IO monad doesn't feel as lightweight as doing the same in an imperative language. As mentioned in the article, even the creators of Disciple (much more credible than I am) share the sentiment that using monadic code for side-effect tracking has a high overhead.
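
The closest thing I know of in Haskell is mtl-style class constraints, which give a coarse separation at some syntactic cost (a sketch):

    import Control.Monad.IO.Class (MonadIO, liftIO)
    import Control.Monad.State (MonadState, get, put)

    -- This function may only touch the Int state; its type forbids IO.
    step :: MonadState Int m => m ()
    step = get >>= put . (+ 1)

    -- This one is additionally allowed full IO on top of the same state.
    stepAndLog :: (MonadState Int m, MonadIO m) => m ()
    stepAndLog = do
      step
      n <- get
      liftIO (print n)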

My whole argument boils down to this: I, personally, want language features that have a good cost-benefit ratio. Put differently: I want the low-hanging fruits, but I can do without the higher-hanging ones if that means I need to first get a ladder. So, with respect to side effects I'd totally be content with only being able to have a binary distinction of pure & absolutely unrestricted code, e.g. being able to express that this one function is pure (in that it doesn't alter shared state) and this other one is not pure. If being able to express finer-grained side-effect tracking means too much syntactic & conceptual overhead, then I don't think it is worth it.

So that's why I'm looking forward to languages like Kotlin (or Rust) that give you some of the low-hanging fruits. For example, Kotlin's type system is certainly not as advanced as Haskell's (e.g. no higher-kinded types as of yet) but it is nonetheless a type system with, imho, good improvements over Java's that will help you find additional inconsistencies in your code. And in any case, from reading some of your posts on reddit, I know that you have different programming language needs (more guarantees) for the type of programs you write and therefore, certainly, you evaluate Haskell's features differently, so that for you the benefits outweigh the costs. However, I came to the conclusion that for most of the projects I want to do in the future a language like Kotlin or Rust (or someday even Disciple) would have a better cost-benefit ratio than Haskell.

First of all, the side-effect tracking is not really fine-grained: to my knowledge you can't (easily) express, e.g., that in the same monadic expression a variable a can do full IO but a variable b can only perform some restricted form of it.

This is exactly what Disciple fixes with effect types.

share the sentiment that using monadic code for side-effect tracking has a high overhead.

I wholeheartedly agree, but that doesn't mean people should be throwing the baby out with the bathwater. We want to control effects. If you don't do this, you're climbing a much steeper slope to understanding your software. Haskell controls effects, but somewhat brutishly. It's a future research direction to give more fine-grained control. Right now, I would still choose the heavy-handed approach over no approach at all.

C++ does not and cannot have effect or region types. This is integral to Disciple's approach.

Const/mutable is very primitive compared to what Disciple does. It doesn't control any other effects either (such as writing or reading from files, which can happen anywhere).

For example, various compiler optimizations (e.g. fusion) depend on the compiler's assumption that no side effects can occur in a certain function. Disciple knows this because it can infer a lack of effects. Haskell knows this because it assumes no effects except in specific circumstances. In C++ you don't ever know it for certain, because you could always have completely circumvented the type system - and not even with little escape hatches like unsafePerformIO, but with huge, gaping holes like mutable or casting.

Regarding your third paragraph, could you explain how this unsafePerformIO escape hatch differs from, say, casting in C++? You seem to be saying that the Disciple compiler can infer lack of effects despite there being a loophole, but the C++ compiler can't because C++'s type system has loopholes. The C++ compiler knows when you cast just as well as the Disciple compiler knows when you are using the escape hatch, so what's the difference?

If you use the escape hatch in Haskell, it says "All bets are off". You have no idea what your program will really do, and it provides no guarantees.

Disciple has no escape hatches, AFAIK, but some would probably have to be added once we get a solid FFI. Certainly it would be necessary when you get down and dirty with the bits and chips, to reinterpret memory as a struct for example. However, it would not be necessary to do so with effects. You should just leave the effect types in. Otherwise the compiler may move your code to strange places thinking it's pure (like GHC does).

As for the floating point operations, right now I don't think we deal with the floating point environment stuff. But it might be worth looking into. We're currently working on solidifying the new ddc core language and an LLVM backend. Once we're up and rolling, then we'll think about extra constructs for breaking all the type guarantees.

Wouldn’t it be great to combine the best parts of languages like Java and languages like Haskell without creating a language that is as complex as Scala? ... And other sub-wishes: great IDE support, corporate backing, learning material, vivid community, data-binding / reactive programming support.

I think F# has gotten to the point where it could use some nice critical writing about areas that could be improved in this fashion. But apparently being in the .NET world is considered a crutch, when being in the Java world is a benefit.

For instance writing a C# style API is a painful process, since half of the features of F# shut off when working with an inheritance based structure. Having to use box to do a runtime cast is ugly.

Seriously though, it seems a lot of effort was put into tailoring the language to interop with the .NET library, for example destructive updates via the <- operator. I don't think these tie the language to .NET though. It's essentially OCaml with the exception of parametric modules/functors. I'm sure the decision to leave this functionality out also had a lot to do with limitations of the CLR.

So while the language has certainly been shaped by .NET, I don't think there are any serious barriers to wider adoption. It's probably possible to port it to the JVM. Also, there are already two JavaScript compilers: pit and websharper.

Wouldn’t it be great to combine the best parts of languages like Java and languages like Haskell without creating a language that is as complex as Scala?

The problem is that Haskell and Java are so different and the intersection of the 2 languages so small that when you combine them, you add 2 languages together + all the boilerplate and syntax needed to try to tie everything together. So I don't think it is possible.

You need to choose what you want to do and make sure your language features are orthogonal as much as possible.

I remember seeing a SPJ comment somewhere that said that, basically, Haskell-style Type Class based polymorphism and OO-style subtyping-based polymorphism are opposite in many ways, and don't interact very well. A language mixing both would really be kind of weird right now.

I don't like how this guy refers to unrestricted mutation as "first-class". That's the direct opposite of truth. Haskell's monadic model is the very definition of what "first-class" is supposed to mean: allow programmers to abstract over actions without resorting to hacks like C macros or Smalltalk blocks. Impure languages actually lose the ability to explicitly handle state, relying on an implicit (and therefore hard-to-reason-about) order of operations.

By "first-class" mutability support I didn't necessarily mean unrestricted mutation. What I actually meant to say was that using mutable data structure should not impose such a syntactic & conceptual overhead (for example in Haskell: special monadic syntax & monads). If this is possible with side-effect tracking (like Disciple shows) then that's absolutely great!

By the way, could you expand on your last sentence. Why is the order of operations in "impure" languages implicit?

I am surprised that he doesn't like Scala, because you can either use it with no side effects or have it as mutable as you want. I also think it is less complex than Haskell. I have tried numerous times to pick up Haskell only to stop, while I learned Scala in one go. Finally, of course there are going to be multiple ways to do things when you have access to varying degrees of mutability.