No problem: I jump over to Hoogle and search for functions of type [Int] -> (Int -> Bool) -> ([Int], [Int]). It produces a short list of results, tells me what library they are in, and even gives a brief description of what the function does. I quickly see that partition is the function I want.

My personal favorite language is Python, and this is something no dynamic language can do easily. :-)

Feature in Python that I miss in Haskell: docstrings at the REPL. Being able to browse modules and see the types of all the functions and constructors they export is nice, but usually not quite enough to save me from having to keep a browser open on the Haddock docs in the background.

That's the type he searched for. Hoogle recognises more generic functions that match that type. In fact in this case, partition is (a -> Bool) -> [a] -> ([a], [a]) ... the two arguments are in the opposite order to what he searched for. It's a very nifty tool.
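For reference, a minimal sketch of the function in question, showing the argument order Hoogle matched despite the search being flipped:

```haskell
import Data.List (partition)

-- partition :: (a -> Bool) -> [a] -> ([a], [a])
-- The predicate comes first and the list second; Hoogle finds it
-- even when you search with the arguments the other way around.
main :: IO ()
main = print (partition even [1..6])  -- ([2,4,6],[1,3,5])
```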

findByExample goes back to Smalltalk, if not earlier. If you don't filter out side-effecting methods from the list of candidates, you can get "interesting" results; you can also implement findByExample in Haskell and it won't have this problem because of monads.

That would prevent OS-level side effects but not side effects internal to your code base. Maintaining separate black lists or white lists as a system evolves (assuming you want to search more than a fixed standard library) is a pain in the ass. Controlling side effects with monads provides semi-automatic black listing.

A type signature might be (a -> IO b), and the fact that IO is a monad is an aside from the fact that you are controlling side effects. You can call (>>=) with IO, but you can also call fmap or (<*>) and so on. That IO is a monad is not what controls the side effects.
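As a trivial sketch of that point: the result of an IO action can be transformed with fmap alone, no (>>=) in sight.

```haskell
-- fmap transforms the result of an IO action via the Functor
-- interface; the side effects live in the IO type itself, and
-- Monad is just one of several interfaces IO happens to support.
doubled :: IO Int
doubled = fmap (* 2) (pure 21)

main :: IO ()
main = doubled >>= print  -- prints 42
```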

Edit: If Steve Yegge, Cedric Beust, Jeff Atwood, etc. can tell great big whopping lies that take up an entire page, then I reserve the right to tell obtuse lies for the purposes of education in a sentence or two.

I give the Haskell community props for making a genuine attempt at addressing many of the issues that plague non-mainstream and/or high-level languages: library availability, performance, robustness, and concurrency. They also seem to do well with helping beginners learn the language. I don't even use Haskell but I agree in large part with their philosophy, goals, and means of achieving those goals.

Compare with Lisp for example. In the Lisp community, they say you don't need libraries because the language is so powerful you can write anything you want in an afternoon, and if you're a beginner who doesn't understand some concept or another, they'll scold you and tell you that you don't deserve to use Lisp.

ASDF is really just a way to declare dependencies and order-of-compilation; much like a makefile. ASDF-INSTALL which is built on top of ASDF is more like apt-get except that it only deals in source code.

You can skip the GPG checks that ASDF-INSTALL performs. Every time it hits a missing GPG key it signals a condition (sorta like an exception) and one of the restarts is to skip the check. But if skipping the check is not an option then yes, you'll have to get the keys.

For Christmas I got "The Haskell School of Expression"; it's a nice introduction to Haskell, with a great step-by-step mix of explanation and examples.

Monad explanations are a bit soft, but from searching the web, I am starting to grok them.

One way I kind of think of monads is "Uber Unix Pipes", in which you can 'wrap' a pipe with another pipe, which can do things to the values in that pipe, or pass them on to another pipe.

Need to debug data? Instead of adding debug statements to a function, you can wrap it in a monad that examines the results of that computation and prints them out. This monad can then pass the result on to another monad which does something else.
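A rough sketch of that idea, using the Writer monad (from the mtl package) to carry debug messages alongside the computation; the `step` function here is a made-up example:

```haskell
import Control.Monad.Writer (Writer, runWriter, tell)

-- A hypothetical computation step that logs its intermediate result.
step :: Int -> Writer [String] Int
step x = do
  let y = x * 2
  tell ["doubled " ++ show x ++ " to " ++ show y]
  return y

main :: IO ()
main = do
  -- Chain two steps; the log "pipes" along with the values.
  let (result, logs) = runWriter (step 3 >>= step)
  mapM_ putStrLn logs  -- doubled 3 to 6 / doubled 6 to 12
  print result         -- 12
```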

In a way, they also map to the notion of 'protocol objects' or 'context objects', without having to change the contents of what they are wrapping to support new features. I know the mapping is not 100%, but monads can wrap monads like an onion, and you can string them together.

Unfortunately, it looks like DDC hasn't seen any real love since July, nor mailing list activity since mid-December. That is, unless they moved the repos/lists and didn't update the page. It's a shame; I was excited about the idea too.

Out of curiosity, then, why is Hayoo better than Hoogle? (I haven't used either)

EDIT: I hadn't used either besides glancing at them when this article was posted. The only differences I see immediately are the cloud at the top of Hayoo's results, and the fact that Hayoo inexplicably requires Javascript.

Am I the only one who feels that unit testing, source control and regression tests have little to do with "project management"? These are rather tools needed by developers to create software, which has little to do with project management.

If unit test frameworks, source control and regression tests are "project management" tools then you might as well say compilers, IDEs and debuggers are project management tools. IMO all of those things fall into the "developer tools" category.

Project management tools would be stuff like Microsoft Project, where the PM coordinates, schedules and otherwise plans the project.

That said, I think the project management they're talking about in the article is "project management" in the sense that you create a new "project" in an IDE, not PMP-like project management.

the developer tool chain is absolutely critical to delivering value on any kind of schedule.

So are the chairs, tables, computers, and potentially a coffee machine, and they aren't a part of "project management" either. They are tools or resources available to the developer to do his job, not project management tools.

Seems to me that it's pretty much impossible to explain "why X is ready for Y". To determine that, you have to put forth a good faith effort to find reasons that it isn't ready, and come up empty handed.

As a comment on the blog post pointed out, they need to find a real use case as well. People won't start using a new language just because it's possible. Enthusiasts may feel like learning it, and some or many (but still relatively few) may use it in production, but that's about it. Assuming that everyone will just start using it because it exists and is nice is a bit naive. I'm guessing most businesses prefer "safe" and "mature" alternatives over something new and cool, even if that results in a technically bad decision.

I nominate "concurrency" as the killer app. Even though Haskell has advantages even for single threaded programs, concurrency seems to be the main thing where Haskell is ahead of the pack (static separation of side effects being the main thing) and it also looks like it's becoming very important in the future. So I think the main selling point to focus on is concurrency.

That, my friend, is beyond elitist. Any self-respecting programmer should be able to learn any programming language they so choose. Haskell is no different in this regard. No different than learning any of the Lisp family, Fortran or Cobol for that matter. You are grossly underestimating the intelligence (whatever the heck that means to you) of the average programmer.

I hope, for the sake of variety and competition, that more programmers jump on the Haskell bandwagon.

From the bit of programming I've done in it, Haskell really isn't that hard. At least it's not as bad as being a dilettante and trying to read articles about how associative arrow homomorphisms on Abelian groups let you write a points-free applicative grammar in five lines or less would lead you to believe.

A lot of the crazy stuff in Haskell is people looking for deep abstractions. If you see a bunch of confusing terminology look at it in the same light as (usefully applied) design patterns: experts searching for a better way to write their code. Just like you can write C++ or Java without using design patterns, you can write Haskell without going too far into category theory la-la land. It just may not be as concise as it could be.

As monads (or lite versions of them) start making their way into more mainstream programming languages, like C#, I think people are going to start waking up to them and realize just how useful they are in making otherwise difficult and tedious problems a lot simpler.

In my years teaching, I have found that children learn Haskell easily due to natural curiosity. Also, carpenters learn Haskell easily. By "learn Haskell", I really mean "learn some fundamental concepts of programming and apply them in Haskell".

However, standard programmers have a really hard time grasping the basics of programming and therefore Haskell (see a problem with that statement?). By "standard programmer" I mean "those who have been exposed to atrocities, for example Java, and internalised them as legitimate".

I don't think "average intelligence" is the correct measure. I agree that there is a "problem" but it is because of programmers unwilling or unable to learn since they have too much invested in their state of misinformation.

And it forces you to think in a different way, which isn't something that someone with deadlines to meet can learn quickly.

I disagree. Haskell lets you write your programs in terms of computation. C (and Perl, and PHP) make you write your programs in terms of the computer.

If all you've ever done is bang keys until the computer is happy, then obviously you'll prefer programming languages that model computers. If you prefer to think about the computation that you want the computer to perform, and then let the computer work out how to perform it, you'll like languages like Haskell. Old habits are hard to break.

There are many important features a language ought to have. High on that list is maintainability by the average coder. Or heck, by even a coder outside of the top 5% of programmers. Haskell fails this.

Monads are no more complicated than, say, the method lookup rules in C++.

People who don't want to understand exactly how they work and how to write their own can just think of them as "first class statements". Just like how people can use the STL but not write the STL, or how people can use LINQ but not write their own LINQ backend (like LINQ to SQL, etc.).

That works for people? I'm shit with someone else's code until I understand it well enough to implement myself, probably not as well and certainly taking some time, but doable. So for me, "using monads" == "writing my own monads".

Every time I look at monads, I feel like I'm a few puzzle pieces away from seeing Haskell clearly, of having it "click". I never do get there, though, so using Haskell seriously is pretty much closed to me right now. :/

They are no more difficult than understanding the details of something as mundane as std::auto_ptr. I think you'll find most C++ programmers would fail hard if they were to sit down and re-implement some portions of STL. I mean, look at the code, it's far from simple.

And yes, that works for people. Lots of companies have Haskell experts write a monadic library for some domain specific task (e.g. financial analysis) and then they have domain experts (not programmers) actually write the code using those libraries. You don't need to understand monads to use monads. If you want to be an expert you will, but you can quite happily defer it while you're a newbie (just like most people wouldn't try template meta programming their first week in C++).

I don't think so. Do you use list comprehensions? You can write them in terms of the list monad:

do x <- [1,2]
   y <- [3,4]
   return (x,y)

(This yields [(1,3),(1,4),(2,3),(2,4)].)

There is nothing magical or difficult to understand about this. In other programming languages, this would be magic; it would be some magical special case in the language. In Haskell, you just need to read the definition of >>= in the list monad, and it makes perfect sense. There is no magic at all!
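For the curious: the list monad's (>>=) is essentially concatMap, so the do-block above desugars into ordinary function calls.

```haskell
-- For lists: xs >>= f = concatMap f xs, and return x = [x].
-- This is the desugared form of the do-block in the comment above.
pairs :: [(Int, Int)]
pairs = [1,2] >>= \x -> [3,4] >>= \y -> return (x, y)

main :: IO ()
main = print pairs  -- [(1,3),(1,4),(2,3),(2,4)]
```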

Just because you don't know about (or worse, are not willing to learn about) monads or functional programming in general doesn't mean that you can freely bash a language that you don't care about. How about starting with a few tutorials? Maybe you will end up amongst that top 5% of folks.

That depends on what your opinion is used for. If you're trying to decide whether Haskell is right for you or not, you can learn as much about the language as you want before deciding, from very little to a great deal.

If you're planning on getting on message boards and declaring that "language X sucks" when you obviously don't understand it very well, expect not to be taken seriously.

I think opinions are by their nature subjective, and so validity too ends up being nothing more than a judgement call.

I'm not sure that reaching a certain point in Haskell dramatically changes your view of the language, unless that certain point is the ability to write slightly non-trivial things with it. As soon as I started playing with it I felt like it meshed well with me. Some people obviously don't feel that way.

But there are a lot of folks that don't know anything about the language at all and get on reddit and bitch about it. You can usually tell these people from their favorite claims:

Monads are too hard.

I want to litter my code with debug statements that print stuff out to the screen. Haskell doesn't allow this, or makes it painful.

Encapsulation of side-effects isn't worth the effort.

The first just tells you that they haven't spent any time actually trying to write something with Haskell, they've just read blogs and tutorials that claim to explain monads and come away confused. Actually writing Haskell and using monads to solve real problems quickly shows them to be nothing complex at all.

The second, also often parroted, ignores two fundamental things about Haskell: the first is that you can easily print debug messages to the screen using Debug.trace. The second is that in FP, you rarely if ever need to, because there's no mutable state, so you never wonder "what's the value of var here." var never changes.
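A minimal sketch of Debug.trace in action:

```haskell
import Debug.Trace (trace)

-- trace prints its message (to stderr) as a side effect when the
-- wrapped value is forced, without changing the function's type.
square :: Int -> Int
square x = trace ("square called with " ++ show x) (x * x)

main :: IO ()
main = print (square 5)  -- logs "square called with 5", prints 25
```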

The third, again, comes from not realizing how easy it is to use pure code in all sorts of different situations. It's a bit like someone in the 1980s claiming that there's no good reason to write a library in a way that makes it reentrant.

Having said all that, there's nothing wrong with not liking Haskell. It's the trolling that's annoying.

It depends on what your idea of "something practical" is. If something practical is an operating system kernel, maybe Haskell isn't a good thing to recommend.

This may seem silly to you, but there are a huge number of folks around who write off everything other than C/C++ because they don't consider other languages practical for systems programming. The irony of course is that most of them don't do systems programming.

I meant wrong for a production language, as opposed to an academic or research language. The memory use of lazy languages is hard to predict, and the performance benefits are underwhelming or even negative. You can't even use foldl in Haskell, one of the fundamentals of functional programming; you have to use foldl', which has strictness annotations. All of the benchmarks you see of Haskell have strictness annotations all over the damn place. I wouldn't want to have to debug weird space leaks that aren't just caused by me being an idiot.
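To illustrate the foldl/foldl' distinction: plain foldl piles up a chain of unevaluated thunks across the whole list, while foldl' forces the accumulator at each step and runs in constant space.

```haskell
import Data.List (foldl')

-- With plain foldl, summing this list would build a million
-- nested (+) thunks before evaluating anything; foldl' keeps
-- the accumulator evaluated as it goes.
total :: Int
total = foldl' (+) 0 [1..1000000]

main :: IO ()
main = print total  -- 500000500000
```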

I just think that strict is a better default than lazy. I wouldn't oppose laziness where there is good reason, but I think that it is usually obvious where lazy is the better option and it is easy to use things like memoization and lazy lists in any decent language. I find laziness makes it much harder to reason about what will evaluate when (especially if you add a bit of IO), but you need to figure that out not just to avoid space leaks, but to understand the performance of your code.

I think Rhoomba's point, which (s)he isn't at all making convincingly, is that laziness by default has a number of downsides. That's a legitimate criticism which doesn't deny the fact that laziness is often useful.

Because the higher abstraction makes it harder to reason about what your code is doing, and strict seems to be more commonly useful as well as faster. If you use a strict language, think about how frequently you use the lazy approach. For file IO, sure, all the time. But that is a solved problem anyway; just make your files iterable à la Python etc. Other than that, all I can think of recently is returning query results, which was implemented using an iterator. And that wasn't very tricky, and is made trivial with features like Scala's streams and projections, or Python's generators.

When using a strict language you don't use the lazy approach because it's incredibly clunky unless the language supports it. When using a lazy language you use laziness all the time, in ways which wouldn't work in a strict language (e.g. for separating generation/selection in an algorithm).

Streams and generators give you one-dimensional laziness. What if you want to lazily traverse a multidimensional structure?
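As a sketch of what that can look like: an infinite two-dimensional structure of which only the demanded corner is ever computed.

```haskell
-- An infinite multiplication table, lazy in both dimensions.
-- Only the cells we actually demand below are ever evaluated.
table :: [[Int]]
table = [[x * y | y <- [1..]] | x <- [1..]]

main :: IO ()
main = print (take 3 (map (take 3) table))  -- [[1,2,3],[2,4,6],[3,6,9]]
```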

How can you quantify 'more commonly useful' and 'faster'? How do you measure those?

Also, I don't think it's easier to understand the performance (or lack thereof) of strict code, as evidenced by the saying 'profile before optimizing'. That same saying holds true in Haskell as well as C.

If you don't understand something, it's completely OK to say so and ask for a clarification - like the last time we discussed similar subjects on Reddit and I had to explain to you what risk means in a business context (or really in any situation in which a group engages in a motivated common pursuit), and why people and groups might want to try to control the risks associated with their activities.

When you make secondary investments, say in a tool, it's often better to measure the return of secondary, or qualitative, benefits rather than primary benefits, simply because secondary benefits show up immediately while primary benefits (bottom line money) might be much harder to measure and they might take such a long while to materialize that it's hard to actually connect them to the event being measured.

When choosing software development tools, a secondary benefit you might want to measure could be the number of hours spent doing unpaid warranty repair work over the lifetime of a project compared to the sales profitability of a project.

Different software development tool communities advertise the tools they are pushing in different contexts, and as providing a variety of secondary benefits. The successful tools either provide secondary benefits which software development organizations wish to have over other possible benefits, or engage in misleading advertising (RoR.)

I hope that this clarified the concepts - I agree that if one has only looked at software development from an individual junior developer's perspective, it might be hard to see why, say, planning and aligning people to common objectives help a group or company succeed in their mission.

Haskell is, at least as far as I know, the only language with a sound strategy for concurrency/parallelism. All the other ones rely on programmers not making subtle mistakes - if they do, then spectacular crashes may happen.

I really don't see any other language as well positioned as Haskell on this front (shared-memory concurrency for commodity desktop computing). Erlang people will claim they are, but they only support a single concurrency paradigm, whereas the real world needs more than one. Haskell has shared-state concurrency with locks or with transactions, message passing with threads, and annotation-based pure parallelism, and very soon it'll have the only proper implementation of nested data parallelism (hopefully with a GPU backend!).
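A sketch of the transactional flavour, using the stm package; the two-account setup here is a made-up example:

```haskell
import Control.Concurrent.STM

-- Move money between two shared accounts atomically: no explicit
-- locks, and conflicting transactions are retried automatically.
transfer :: TVar Int -> TVar Int -> Int -> STM ()
transfer from to amount = do
  modifyTVar' from (subtract amount)
  modifyTVar' to (+ amount)

main :: IO ()
main = do
  a <- newTVarIO 100
  b <- newTVarIO 0
  atomically (transfer a b 30)
  balances <- (,) <$> readTVarIO a <*> readTVarIO b
  print balances  -- (70,30)
```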

I really don't see a competitor here, so unless one shows up before manycore becomes pervasive, the writing seems to be on the wall.

Haskell is, at least as far as I know, the only language with a sound strategy for concurrency/parallelism. All the other ones rely on programmers not making subtle mistakes - if they do, then spectacular crashes may happen.

I'd argue that Oz and Alice ML can be considered to have sound strategies for concurrency/parallelism.

Could you describe the "only sound strategy for concurrency" claim in broader terms?

Is there a class of business problem solving programs it's not practical to write in, say, standard Java, which would be practical to write in standard Haskell? Is the inverse true? Is either of the groups significantly smaller or larger than the other?