I also want to point out that you can catch logic errors with a static type system, if you encode the logic in the type system. You would never get that from a direct translation from another language. Many Haskell libraries actually do this work for you, forcing you to write correct programs, which is a huge benefit from an ecosystem perspective.

Edit: Whoops, thought this was in /r/programming. I'm just preaching to the choir.

In practice, this only works well for "relatively simple" invariants (for some value of "relatively simple"), because if you encode the logic in the type system, you also have to encode the knowledge that this logic is respected, in a way the type system understands. This gets very painful for sophisticated invariants; in Haskell, you are essentially proving everything by hand, with as much support as you can get from the inference engine.
With dedicated languages that study the "code and proof" cohabitation (Why3, for example) you can go a bit further, but you're still relatively limited: the effort to prove something is often much greater than is justified by the safety requirements you have. Going to the limit and proving full correctness statically does not make economic sense today for most problem domains.

The right thing to do, I believe, is to choose the compromise between ease of programming and safety requirements adequate to the task at hand, and correspondingly choose where you stand on the static/dynamic checking continuum (and hope that your language allows you to vary along that continuum easily). Hindley-Milner inference¹ basically comes for free (... once you have rejected the language features that make it difficult), so it's the baseline, but more sophisticated static features may or may not make "economic" sense in a given situation.

¹: even within HM plus data types, you can do relatively sophisticated things (e.g. encoding non-empty lists statically, or using parametricity to get strong invariants), but it comes at a cost in the clarity of the program.
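To make the non-empty-list example concrete, here is a minimal sketch of what that encoding looks like within plain HM plus data types (this is essentially what `Data.List.NonEmpty` in `base` provides; the helper names here are made up):

```haskell
-- A list that is non-empty by construction: the head is always present.
data NonEmpty a = a :| [a] deriving (Eq, Show)

-- 'neHead' is total: there is no empty case that can go wrong.
neHead :: NonEmpty a -> a
neHead (x :| _) = x

-- Converting from an ordinary list forces the caller to acknowledge
-- the empty case once, at the boundary, instead of everywhere.
fromList :: [a] -> Maybe (NonEmpty a)
fromList []       = Nothing
fromList (x : xs) = Just (x :| xs)
```

The cost in clarity mentioned above shows up exactly at that boundary: every conversion from `[a]` now goes through a `Maybe`.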

We use types extensively to prevent logic errors as well as simple but potentially hard-to-track-down mistakes. For instance, we use types for type-safe times in different time zones, and also for currencies. Dependent typing would make some of this logic much easier to encode in some of our applications, and I need to make some time to look at how Agda might be able to help us.
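The currency case can be done with phantom types alone; here is a minimal sketch (the tag and function names are hypothetical, not from any particular library):

```haskell
-- Phantom tags for currencies; they have no runtime representation.
data USD
data EUR

-- The tag 'c' exists only at compile time; at runtime this is just a number.
newtype Amount c = Amount Rational deriving (Eq, Show)

-- Arithmetic is only defined for matching currencies, so adding
-- an 'Amount USD' to an 'Amount EUR' is a compile-time type error,
-- not a bug waiting to be tracked down in production.
add :: Amount c -> Amount c -> Amount c
add (Amount x) (Amount y) = Amount (x + y)
```

The same trick works for time zones: tag the timestamp type with the zone and only allow comparisons between matching tags.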

I increasingly believe that safety outweighs near-term productivity. It enhances long-term productivity because bugs are expensive, especially in my business. Software is too complex to say things like, 'Well, this is probably a bug, but it won't impact customers.' That may be true until it does at some point in the future. I hope to see Haskell continue to implement more and more type features to enable my team to build applications whose correctness I can feel good about.

The question is not whether or not it is hard to encode the invariants, but whether or not libraries that do that kind of hard encoding can then abstract over the difficult internal proofs to provide a simple user interface. I don't know the answer to that, though.
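One pattern that already answers this at the low end is the smart constructor: the difficult reasoning happens once, behind the module boundary, and users only see a simple interface. A minimal sketch (names are made up):

```haskell
import Data.List (sort)

-- In a real library the 'Sorted' constructor would not be exported,
-- so the only way to obtain a 'Sorted a' is through 'fromList',
-- which establishes the invariant once, internally.
newtype Sorted a = Sorted [a] deriving (Eq, Show)

fromList :: Ord a => [a] -> Sorted a
fromList = Sorted . sort

-- Consumers rely on the invariant without re-proving it:
-- merging two sorted lists stays sorted by construction.
merge :: Ord a => Sorted a -> Sorted a -> Sorted a
merge (Sorted xs) (Sorted ys) = Sorted (go xs ys)
  where
    go [] bs = bs
    go as [] = as
    go (a : as) (b : bs)
      | a <= b    = a : go as (b : bs)
      | otherwise = b : go (a : as) bs
```

Whether this scales from "sorted list" to genuinely sophisticated invariants is exactly the open question.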

whether or not libraries that do that kind of hard encoding can then abstract over the difficult internal proofs to provide a simple user interface

Sure they can. Coq has a feature that allows you to extract ML or Haskell programs. This, I think, should be more heavily used for core language libraries. Not sure if Agda/Idris can do the same, but it sure would be great if they could.

If the Agda is readable and there are proofs that the Haskell code is equivalent, and if the generated Haskell can be given haddock comments easily, then it hardly matters what the resulting Haskell looks like. I'm not at all familiar with Agda though, and unsure whether this is the case.

Just watched a Common Lisper colleague for two days try to figure out why a JSON encoding library was throwing cryptic exceptions on a certain value in production. In this case it actually took the whole server down due to an overflowed max exception depth. Just now it was discovered that the library is incapable of encoding a cons. Static types would have caught this weeks ago. I see the same kind of things over and over. But here, I am indeed preaching to the choir.
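For contrast, this is how a typeclass-based encoder moves that failure to compile time. A toy sketch (aeson's `ToJSON` works on the same principle, but this class and its instances are invented for illustration):

```haskell
import Data.List (intercalate)

-- Only types with an 'Encode' instance can be serialized.
class Encode a where
  encode :: a -> String

instance Encode Int where
  encode = show

instance Encode a => Encode [a] where
  encode xs = "[" ++ intercalate "," (map encode xs) ++ "]"

-- Passing an unsupported value -- say a raw pair standing in for
-- the cons above -- is rejected when you build, not in production:
--   encode (1 :: Int, 2 :: Int)   -- type error: no Encode instance
```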

If that's a non-rhetorical question, one answer is the Google Closure compiler which adds a simple static type system, with inference, that knows about ints, strings, functions, arrays and structural typing of objects. So it stops typos, type mismatches, null/undefined are not values of all types, etc. The syntax is lame (comments), the system is a bit quirky and the closure compiler is slow, but it will save you from dynamic errors like that.

And now that you mention a compiler, I understand the immediate value of embedding JavaScript DSLs in Haskell: they provide scope checking (i.e. protection from typos) and basic type inference for free.

Yeah. FWIW there's a wiki page I wrote about JS. Might be helpful to liberate you from the hell of JS. GHCJS and UHC are both very promising and actively developed.

I also wrote a limited compiler for Haskell to JS which I'm already using to generate some pages in production. I am slowly rewriting pages in Haskell when the time comes to improve them. The compiler's not public yet, as it sucks.

ClojureScript is a new compiler for Clojure that targets JavaScript. It is designed to emit JavaScript code which is compatible with the advanced compilation mode of the Google Closure optimizing compiler.

I'm not certain, but I think that means you get type checking at compile time. I'm very uncertain about that, though.

IIRC there was a reply to his blog post by Sam Tobin-Hochstadt, who pointed out that he had done similar experiments in his thesis. I'm not sure it would have made sense to repeat exactly the same thing.