Personally, I use Haskell and don't care one whit about concurrency or parallelism. Haskell is first and foremost a nice language, with properties that make it easy to understand, analyze, and work with. I'm thrilled that some people find those properties useful for solving problems in concurrent and parallel programming. I find them useful for solving problems in organization of code, and plenty of other problems, too.

So that's why Haskell isn't based on a process calculus: because Haskell is not about concurrency; it's a general purpose language that can be used for everything from high-performance scientific computation down to a nicer way to approach scripting tasks. Haskell doesn't need to become a single-purpose language to be useful for solving any specific kind of problem.

It is not an accident. We were very well aware of the potential for parallelism in pure functional languages already in the 80s, and we had some good implementations of parallelism that predate Haskell.

There is an amusing précis of the vicissitudes of parallelism and functional programming in the 80s in this lecture by Peyton Jones, starting about 5 minutes in. augustss can surely fill in the details. http://www.youtube.com/watch?v=NWSZ4c9yqW8

There's a CSP implementation by the name of CHP if you want a process calculus.

par, though, just uses a completely different model. You can't deadlock or livelock with it, so any process calculus loses its appeal. Besides, the GHC guys are too busy to implement process calculi, but it isn't necessary either, because all the primitives and language flexibility are already there.
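To make the "completely different model" point concrete, here's a minimal sketch using `par` and `pseq` (exported by `GHC.Conc` in base). The `fib` example is the standard textbook illustration, not code from this thread. Both arguments of `par` are pure, so there is nothing to lock on; the worst case is that a spark is never picked up and the value is computed on demand, with the same result either way.

```haskell
import GHC.Conc (par, pseq)

-- par sparks evaluation of its first argument in parallel and
-- returns its second; pseq forces evaluation order. Purity means
-- the result is identical whether or not a spark actually runs.
fib :: Int -> Int
fib n
  | n < 2     = n
  | otherwise = x `par` (y `pseq` (x + y))
  where
    x = fib (n - 1)
    y = fib (n - 2)

main :: IO ()
main = print (fib 20)  -- 6765, with or without the -threaded runtime
```

Without the threaded runtime the spark is simply ignored and evaluation proceeds sequentially; the program's meaning never depends on the scheduler.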

Implementing any process calculus without having anything like forkIO is, TBH, an exercise of ivory-tower proportions.
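For reference, the primitive in question: a minimal sketch of spawning a thread and passing it a message with `forkIO` and an `MVar` (both in `Control.Concurrent`, part of base). This is just the textbook pattern, not code from any of the libraries mentioned above.

```haskell
import Control.Concurrent (forkIO)
import Control.Concurrent.MVar (newEmptyMVar, putMVar, takeMVar)

-- Spawn a lightweight thread and receive one message from it.
-- An MVar is the simplest channel-like primitive: takeMVar blocks
-- until the child thread has put a value in.
main :: IO ()
main = do
  box <- newEmptyMVar
  _ <- forkIO (putMVar box "hello from the child thread")
  msg <- takeMVar box
  putStrLn msg
```

Any process-calculus encoding ultimately bottoms out in something like this pair: a way to spawn a process and a way to communicate with it.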

Pure functional languages enable you (potentially) to use concurrency without explicitly reasoning about processes and communication: when you don't have side effects, you don't need to worry about serialization. (The language implementation needs to take care of everything, but it doesn't need to expose all the nasty bits that you need in C++ or Java.)

IMO, process calculus is a mechanism for describing concurrency problems and looking for solutions, but not a good way to avoid them.

Interesting. The compiler I found hasn't seen any changes since 1998, so I guess the project is dead. I wonder if that's because the developers lost interest or if there were issues with the language itself.

And you can implement the lambda calculus on a Turing machine. (And I suspect that many process calculi can also implement the lambda calculus.) What gives?

Choosing a semantics for a programming language involves more considerations than simply expressivity. One semantics for a language may be preferable to another because it makes proofs easier, or simplifies complexity analysis, or makes it clearer how one might implement the language.
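To make the expressivity point concrete: "implementing the lambda calculus" takes only a few lines in any general-purpose language. Here is an illustrative sketch of an untyped lambda calculus evaluator in Haskell; the representation and names are made up for this example, and substitution is deliberately naive (not capture-avoiding), which is only safe for terms whose bound and free names don't clash.

```haskell
-- An untyped lambda calculus term.
data Term = Var String
          | Lam String Term
          | App Term Term
  deriving (Eq, Show)

-- Naive substitution: replace free occurrences of x with s.
-- NOTE: not capture-avoiding; assumed safe for the example terms.
subst :: String -> Term -> Term -> Term
subst x s (Var y)   | x == y    = s
                    | otherwise = Var y
subst x s (Lam y b) | x == y    = Lam y b
                    | otherwise = Lam y (subst x s b)
subst x s (App f a) = App (subst x s f) (subst x s a)

-- One step of normal-order (leftmost-outermost) reduction.
step :: Term -> Maybe Term
step (App (Lam x b) a) = Just (subst x a b)
step (App f a) = case step f of
  Just f' -> Just (App f' a)
  Nothing -> App f <$> step a
step (Lam x b) = Lam x <$> step b
step (Var _)   = Nothing

-- Reduce to normal form (may diverge, as lambda terms can).
normalize :: Term -> Term
normalize t = maybe t normalize (step t)
```

For example, `normalize (App (Lam "x" (Var "x")) (Var "y"))` reduces to `Var "y"`. An interpreter like this is itself a choice of semantics: it fixes an evaluation order, which is exactly the kind of consideration beyond raw expressivity.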

When I did a quick google search before posting the link, I found a statement that the λ-calculus can be implemented using the π-calculus, but nothing about the inverse. Does that mean the two are equivalent, and is there a proof of this statement?

To show that you can implement lambda in pi is sufficient to show equivalence. Lambda was proven to be equivalent to Turing machines, and the two formalisms are the bases for defining computability in the first place.

Basically, that the pi calculus is implementable at all is proof of the other direction.

A student here. What I just learned seems to imply that Turing equivalence applies only to functions (something that calculates an output from an input), not to processes that are not functions. A typical concurrent program (or even an ordinary program that does IO) does not seem to fit that model.

It's been a while since I took theory, but I believe this can be resolved by considering a program and its IO as the inputs/outputs of a universal Turing machine. It's been shown that one infinite tape is equivalent to arbitrarily many tapes, and you may intuitively consider every input or output action as reading from or writing to another tape.

As to concurrent processing, I would argue that I could build a Turing machine that takes any two machines operating on the same tape and interleaves their execution such that concurrency is simulated. And a UTM, of course, could take your two programs and their inputs, along with my program, and simulate their behavior.
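The interleaving argument can be sketched in a few lines. Treat each machine as a step function over the shared state and alternate single steps; the type and names here are purely illustrative, not any standard formalism.

```haskell
-- Each "machine" is a step function over a shared state s.
type Step s = s -> s

-- Run n single steps, alternating whose turn it is. This is
-- exactly the sequential simulation of two concurrent machines.
interleave :: Int -> Step s -> Step s -> s -> s
interleave 0 _ _ s = s
interleave n f g s = interleave (n - 1) g f (f s)  -- swap roles each step
```

For instance, `interleave 4 (+1) (*2) 0` runs `+1`, `*2`, `+1`, `*2` in turn on the shared state, yielding 6; a single deterministic machine reproduces the interleaved behavior.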

If my logic is serving me correctly, from the premises that UTMs are a subset of TMs, and that all TMs are, as you say, functions, it follows that IO-performing and concurrent programs qualify as functions.

Besides, if the pi calculus were strictly more powerful than Turing machines, you could bet a hoopla would have been made and the Church-Turing thesis would have been disproven.

I agree with most of your logic except for a minor part. Your argument assumes infinite input. However, a TM does not allow infinite input (the tape is infinite, but the input written on it at the start is not). To see why this is a problem, imagine how you would convert such a TM with infinite input to the lambda calculus.

The pi calculus is not strictly more powerful than TMs as long as finite input is considered. However, the TM model says nothing about infinite input, and hence the pi calculus does not violate the Church-Turing thesis when it deals with infinite input.

Well, I assumed reactive systems could be modeled using the pi calculus, but at this point my understanding is limited. (Though I have trouble seeing how continuous input could be made to work with the lambda calculus.) I plan to study this topic further.

You have to be suspicious of claims like "strictly more powerful." Powerful according to what? If you pose a question to a problem formulated in terms of the Pi calculus, and you have an effective procedure to get a yes or no answer to that question, then you are guaranteed to be able to program a Turing machine to answer that exact same question. The Turing machine--itself--may not be a very good picture of an actual concurrent machine (in terms of deadlock analysis etc) but that doesn't mean you can't embed concurrency-related logic into a Turing machine program.

Exactly. In this case, as you've phrased it (language acceptance), it is now impossible to build a computer that faithfully implements the pi calculus. I.e., if the pi calculus models computations that read an infinite amount of input and produce a yes/no answer in finite time, then these computations would not be implementable on any machine we currently know about.

If you're modelling machines that pass messages based on an infinite stream of input, process calculi are certainly more amenable to analysis than thinking of them as Turing machines (Turing machines are a clunky way to represent most systems, actually, and it's hardly ever done). But we don't make claims about those machines we are modelling (the objects described by our calculus) as being able to give answers to a wider set of language acceptance problems than Turing machines. We model infinite processes in a convenient way--we don't offer a {machine/rewrite-system/effective procedure} that can supply answers that a Turing machine cannot.

My experience is that when people want to talk about something being "more powerful" than a TM, lambda calculus, etc., they're generally either introducing nonsense that can't be physically realized (such as actual infinities, rather than the merely unbounded resources a TM demands), talking about underspecified open systems (which thus can't be proven to not be chatting with a Halting oracle on the side, or some such rot) or modeling concepts that are impossible to work with directly using standard formalisms (which is, y'know, the whole reason we have compilers in real life).

That makes sense. I'm still a little unsure about what that really means, though. I understand that Turing equivalence has implications on what can be computed. Does this also mean the ways in which something can be computed are essentially equivalent? Given my limited understanding of the matter, I see process calculi as a way to implicitly include time in the model of computation, something which seems to be missing from the lambda calculus.

In practical terms, Turing-equivalence is roughly a generalization of Greenspun's Tenth Rule--you can write any possible program in a Turing-complete language, but nothing guarantees that there's an easier way to do so than first writing an interpreter for a better language. For instance, while Java is Turing-complete, there is a non-empty class of programs for which the easiest way to write them in Java involves first inventing Clojure.

Most directly, it relates to decision problems, but you can also discuss "Turing-computable functions." You can model time with the lambda calculus; it may not be the most convenient formalism, and it depends on what you want to prove.

And let me add for completeness: In a process calculi like the pi-calculus, you can similarly embed the lambda calculus.

In my view, it doesn't really matter what you are based on so much as whether you support concurrency. Also note that there is a striking similarity between some process calculi and the lambda calculus. So in the spirit of Haskell being LC+Stuff, you could regard it as LC+Concurrency+Stuff.

LISP was based on the lambda calculus. Haskell is based on typed category theory. (The lambda calculus is still in there. Look how our vision works; every system back to prehistoric fish is still in the messy code base.)

Er, no it isn't. It's based on the typed lambda calculus (specifically System F omega). No category theory need be applied (although a correspondence exists, just as there is a correspondence between category theory and virtually every other mathematical discipline).

Yeah, the purpose of category theory is basically to abstract over abstractions. Also, "points" are more of a set theory thing, whereas category theory prefers working with the behavior of arrow composition while eliding the elements "inside" the objects. So category theory tends pretty consistently to be what Haskell programmers would call "pointless".
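The pun is easy to show in code. A tiny illustrative example (the names are made up) contrasting point-free composition with the pointful equivalent:

```haskell
import Data.Char (toUpper)

-- Point-free ("pointless") style: build the function by composing
-- arrows, never naming the argument -- the categorical habit.
shout :: String -> String
shout = (++ "!") . map toUpper

-- The pointful equivalent names its "point" s explicitly.
shout' :: String -> String
shout' s = map toUpper s ++ "!"
```

Both give `shout "hi" == "HI!"`; the first talks only about composing arrows, the second about what happens to an element.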