I'm just reading up on lambda calculus to "get to know it". I see it as an alternative model of computation, as opposed to the Turing machine. It's an interesting way of doing things with functions/reductions (crudely speaking). Some questions keep nagging at me, though:

What's the point of lambda calculus? Why go through all these functions/reductions? What is the purpose?

As a result I'm left to wonder: What exactly did lambda calculus do to advance the theory of CS? What were its contributions that would allow me to have an "aha" moment of understanding the need for its existence?

Why is lambda calculus not covered in texts on automata theory? The common route is to go through various automata, grammars, Turing machines and complexity classes. Lambda calculus is only included in the syllabus for SICP-style courses (perhaps not?). But I've rarely seen it be a part of the core curriculum of CS. Does this imply it's not all that valuable? Maybe not, and maybe I'm missing something here?

I'm aware that functional programming languages are based on lambda calculus, but I'm not counting that as a valid contribution, since lambda calculus was created long before we had programming languages. So, really, what is the point of knowing/understanding lambda calculus, w.r.t. its applications/contributions to theory?

$\begingroup$In a way, its contribution was to create the field. Don't forget that Church came up with lambda calculus first, but it wasn't at first seen as a universal model of computation.$\endgroup$
– Dan Hulme, Mar 24 '14 at 10:21

$\begingroup$In my core studies I had Functional Programming which discussed Haskell and a little bit of Lisp. The successor to that was Principles of Programming Languages, which used ML and introduced lambda calculus. As some answers show, that's really where lambda calculus belongs: in a class about programming languages, typing, etc.$\endgroup$
– Shaz, Mar 25 '14 at 12:50

7 Answers

It is a simple mathematical foundation of sequential, functional, higher-order
computational behaviour.

It is a representation of proofs in constructive
logic.

This is also known as the Curry-Howard correspondence. Jointly,
the dual view of $\lambda$-calculus as proof and as (sequential,
functional, higher-order) programming language, strengthened by the algebraic feel
of $\lambda$-calculus (which is not shared by Turing machines), has led
to massive technology transfer between logic, the foundations of mathematics, and programming.
This transfer is still ongoing, for example in homotopy type theory. In
particular the development of programming languages in general, and
typing disciplines in particular, is inconceivable without
$\lambda$-calculus. Most programming languages owe some degree of debt to
Lisp and ML (e.g. garbage collection was invented for Lisp), which are direct descendants of the $\lambda$-calculus. A
second strand of work strongly influenced by $\lambda$-calculus is
interactive proof assistants.

Does one have to know $\lambda$-calculus to be a competent programmer, or
even a theoretician of computer science? No. If you are not interested
in types, verification and programming languages with higher-order
features, then it's probably a model of computation that's not
terribly useful for you. In particular, if you are interested in complexity theory, then
$\lambda$-calculus is probably not an ideal model because the basic
reduction step $$(\lambda x.M) N \rightarrow_{\beta} M[N/x]$$ is powerful: it can make an arbitrary number of copies of $N$, so
$\rightarrow_{\beta}$ is an unrealistic basic notion in accounting for
the microscopic cost of computation. I think this is the main reason why
Theory A is not so enamoured of $\lambda$-calculus. Conversely, Turing machines are not terribly inspirational for programming language development, because there are no natural notions of machine composition, whereas with $\lambda$-calculus, if $M$ and $N$ are programs, then so is $MN$. This algebraic view of computation relates naturally to programming languages used in practice, and much language development can be understood as the search for, and investigation of, novel program composition operators.
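
The cost issue can be made concrete in a few lines of code. Here is a minimal sketch (the tuple-based term representation and helper names are my own, not any standard library) showing that a single $\beta$-step $M[N/x]$ can multiply the size of the argument term:

```python
# Lambda terms as tagged tuples: ('var', name), ('lam', name, body), ('app', f, a).

def subst(term, name, value):
    """Capture-naive substitution M[N/x] (fine here: no shadowed names)."""
    tag = term[0]
    if tag == 'var':
        return value if term[1] == name else term
    if tag == 'lam':
        _, param, body = term
        return term if param == name else ('lam', param, subst(body, name, value))
    _, f, a = term
    return ('app', subst(f, name, value), subst(a, name, value))

def size(term):
    """Count the nodes of a term."""
    return 1 + sum(size(t) for t in term[1:] if isinstance(t, tuple))

# One beta step on (\x. x x x) N triples the occurrences of N:
N = ('app', ('var', 'f'), ('var', 'a'))                           # some argument term N
redex_body = ('app', ('app', ('var', 'x'), ('var', 'x')), ('var', 'x'))
result = subst(redex_body, 'x', N)                                # M[N/x]
print(size(N), size(result))                                      # 3 11
```

Iterating such a step makes terms grow exponentially, which is why counting $\beta$-steps looks, at first sight, like cheating as a cost measure.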

$\begingroup$This is a very nice answer.$\endgroup$
– Suresh Venkat, Mar 24 '14 at 5:41


$\begingroup$Concerning the "unrealism" of $\beta$-reduction: Beniamino Accattoli and Ugo Dal Lago recently proved a surprising result stating that the number of $\beta$-steps to normal form in any standard reduction strategy (e.g., leftmost-outermost) is an invariant complexity measure. This means that, even if implementing $\beta$-reduction per se is expensive, counting the number of reductions is not an unrealistic complexity measure (for instance, it would not affect the definition of the class $\mathsf P$).$\endgroup$
– Damiano Mazza, Mar 24 '14 at 11:42


$\begingroup$@DamianoMazza Since it's a new result, it could not have been influential in the history of Theory A. In addition, I think this result holds only for some notions of reduction. IIRC Asperti's paper P = NP, up to sharing shows that P and NP collapse if you have an 'optimal' reduction strategy in the sense of J.-J. Lévy.$\endgroup$
– Martin Berger, Mar 24 '14 at 13:02


$\begingroup$@MartinBerger: yes of course. My comment was meant to add information on the complexity of $\beta$-reduction, not at all to "correct" your statement about the lack of influence on Theory A (which I repeated in my answer). By the way, Accattoli and Dal Lago's result holds for usual, leftmost-outermost $\beta$-reduction (cf. p.2, c.2, l.11 of their paper). That's why it's so interesting (and worth mentioning). Asperti's result concerns, as you say, Lévy-optimal reduction, which is not a $\beta$-reduction strategy (in particular, leftmost-outermost is not Lévy-optimal).$\endgroup$
– Damiano Mazza, Mar 24 '14 at 16:15

I think $\lambda$-calculus has contributed in many ways to this field, and still contributes to it. Three examples follow, and this is not exhaustive. Since I am not a specialist in $\lambda$-calculus, I certainly miss some important points.

First, I think having different models of computation that turn out to represent the exact same set of functions was at the origin of the Church-Turing thesis, and $\lambda$-calculus played a major role, alongside Turing machines and $\mu$-recursive functions.

Second, regarding functional programming languages, I do not see why that is not a valid contribution: basically, all our models of computation were invented long before anything happened in Computer Science! Thus $\lambda$-calculus brought another view of computation, in some sense orthogonal to Turing machines, that is very fruitful in the field of programming languages (which is part of the field of theory of computation).

Finally, and as a more specific example, I think of Implicit Computational Complexity which aims at characterizing complexity classes by means of dedicated languages. The first results such as Bellantoni-Cook's Theorem were stated in terms of $\mu$-recursive functions, but more recent results use the vocabulary and techniques of $\lambda$-calculus. See this Short introduction to Implicit Computational Complexity for more and pointers, or the proceedings of the DICE workshops.

Apart from the foundational role of the $\lambda$-calculus, which was mentioned in all other answers, I would like to add something on

What exactly did the lambda calculus do to advance the theory of CS?

I believe that concurrency theory is one field of CS which has been tremendously influenced by the compositional view mentioned by Martin Berger. Of course, the $\lambda$-calculus itself is not a concurrent language, but its "algebraic spirit" permeates the definition and development of modern process calculi. I think it is fair to say that process algebras are descendants of the $\lambda$-calculus more than they are of automata and Turing machines and, in general, concurrency theory wouldn't be what it is today without the import of the $\lambda$-calculus.

Besides concurrency, I am happy to see implicit computational complexity (ICC) mentioned in one of the answers (it is a field in which I am personally involved). However, it must be said that, so far, ICC has no use in CS theory outside of programming languages and, in a very limited way, software verification. This is just an example of a more general situation: the modular, compositional, highly structured view of computation underlying the $\lambda$-calculus and predominant in "Theory B" seems to bring little insight into the deep problems of interest in "Theory A". Why this is so is, for me, an interesting and at the same time frustrating subject of reflection. (See this question for a related discussion).

(As a side note, let me mention that, thanks to its deep connections with proof theory (Curry-Howard), the $\lambda$-calculus has interesting applications also outside of CS "proper", in particular in set theory. I am especially alluding to recent work on classical realizability, a research program developed from the early 2000s onward by Jean-Louis Krivine (and several other people now, such as Alexandre Miquel, the lectures found on his web page are an excellent introduction to the subject). From the model-theoretic standpoint, classical realizability may be seen as a "non-commutative" generalization of Cohen's forcing, yielding models of set theory impossible to obtain with forcing).

$\begingroup$Good point about concurrency theory: the development of types for interacting systems, pursued mostly by K. Honda and his coworkers, is, to a substantial part, about rephrasing types for $\lambda$-calculi as types for interactive systems. The key bridge that makes this all work is Milner's Functions as processes, giving a translation from $\lambda$ to $\pi$. This led to a reverse technology transfer already: most Hoare logics for ML-like programming languages were born as logics for typed $\pi$-fragments, and then pushed back via Milner's encoding.$\endgroup$
– Martin Berger, Mar 24 '14 at 12:57


$\begingroup$If I could clone myself, I would make a duplicate to look into P/NP using BLL and realizability. Logical relations seem to not be "natural proofs", the linear type discipline ensures you cannot relativize, and BLL's polytime completeness theorems seem to let you avoid worrying about whether or not there are classes of algorithms you've missed. The relationship between linearity and representation theory suggests connections to GCT, too. I suppose all this is why you are tantalized and frustrated, though. :)$\endgroup$
– Neel Krishnaswami, Mar 24 '14 at 16:28

$\begingroup$Re B vs. A: lambda-calculus is only about structuring the same computations better, but can't, for instance, produce better algorithms. By cut-elimination and the subformula property on the result, any program with a first-order type can be written without first-class functions. But cut-elimination corresponds to duplicating code: so we find again that you don't need higher-order functions if you are willing to do enough copy-pasting. (Reynolds's defunctionalization allows you to avoid even the copy-pasting, but is a global transformation, so it's better left to a compiler).$\endgroup$
– Blaisorblade, Mar 24 '14 at 18:32

$\begingroup$Anecdotally speaking, my comment is motivated by programming with an algorithmist — he's great, but he does seem to abstract much less than I find desirable. I don't claim that's general, but I claim that abstraction in the code is often not needed/emphasized when writing algorithms. (Consider how many quicksort implementations inline the partition function — I find that unacceptable).$\endgroup$
– Blaisorblade, Mar 24 '14 at 18:38

Your questions can be approached from many sides. I'd like to leave the historical and philosophical aspects on the side and address your main question, which I take to be this:

What's the point of lambda calculus? Why go through all these functions/reductions?

What is the point of Boolean Algebra, or Relational Algebra, or First-Order Logic, or Type Theory, or some other mathematical formalism/theory? The answer is that they have no inherent purpose, even if their designers created them for some purpose or another. Leibniz, when erecting the foundations of Boolean Algebra, had a certain philosophical project in mind; Boole studied it for his own reasons. De Morgan's work on Relational Algebra was also motivated by various projects of his; Peirce and Frege had their own motivations for creating modern logic.

The point is: whatever reason Church may have had when creating lambda calculus, the point of lambda calculus varies from one practitioner to another.

To someone it's a convenient notation for talking about computations; an alternative to Turing Machines, and so on.

To another it's a solid mathematical basis on which to build a more sophisticated programming language (e.g. McCarthy, Stanley).

To a third person it's a rigorous tool for giving the semantics of natural as well as programming languages (e.g. Montague, Fitch, Kratzer).

I think Lambda calculus is a formal language that is worth studying for its own sake. You can learn the fact that in untyped lambda calculus we have these little beasts called 'Y-combinators', and how they help us define recursive functions and make the proof of undecidability so elegant and simple. You can learn the amazing fact that there is an intimate correspondence between simply typed lambda calculus and a type of intuitionistic logic. There are many other interesting topics to explore (e.g. how should we give the semantics of lambda calculus? how can we turn lambda calculus into a deductive system like FOL?)
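
To give a taste of the first of those "little beasts", here is the fixed-point combinator transcribed into Python (a sketch: I use the eta-expanded "Z" variant, because Python evaluates strictly and the classic Y combinator would loop forever):

```python
# Z combinator: a strict-evaluation-friendly fixed-point combinator.
Z = lambda f: (lambda x: f(lambda v: x(x)(v)))(lambda x: f(lambda v: x(x)(v)))

# Recursion with no self-reference by name: the functional receives
# "itself" as its first argument.
fact = Z(lambda self: lambda n: 1 if n == 0 else n * self(n - 1))

print(fact(5))  # 120
```

The point is exactly the one made by the undecidability proofs: anonymous functions alone already give you full recursion.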

$\begingroup$I am not buying this "for its own sake" argument. The point of a mathematical formalism is to elucidate our understanding of some concept. What is elucidated may develop over time, but unless a formalism helps us think more clearly about some idea, it usually dies out. In that sense it's valid to ask how lambda calculus elucidates the concept of computation in a way that is not subsumed by TMs.$\endgroup$
– Sasho Nikolov, Mar 23 '14 at 21:16


$\begingroup$I think one can study lambda calculus without ever thinking of reduction and substitution as computation. If I'm right and that is in fact possible, then we can have interest in lambda calculus even if we're not interested in computation at all. But thanks for your comment; I'll try to edit my answer accordingly as soon as I get a chance.$\endgroup$
– Hunan Rostomyan, Mar 23 '14 at 21:32

Turing argued that Mathematics can be reduced to a combination of reading/writing symbols, chosen from a finite set, and switching between a finite number of mental 'states'. He reified this in his Turing Machines, where symbols are recorded in cells on a tape and an automaton keeps track of the state.

However, Turing's machines are not a constructive proof of this reduction. He argued that any 'effective procedure' can be implemented by some Turing Machine, and showed that a Universal Turing Machine can implement all of those other machines, but he didn't actually give a set of symbols, states and update rules which implement Mathematics in the way that he argued. In other words, he did not propose a 'standard Turing Machine', with a standard set of symbols which we can use to write down our Mathematics.

Lambda Calculus, on the other hand, is precisely that. Church was specifically trying to unify the notations used to write down our Mathematics. Once it was shown that LC and TMs are equivalent, we could use LC as our 'standard Turing Machine' and everyone would be able to read our programs (well, in theory ;) ).

Now, we could ask why treat LC as a primitive, rather than as a TM dialect? The answer is that LC's semantics are denotational: LC terms have 'intrinsic' meaning. There are Church numerals, there are functions for addition, multiplication, recursion, etc. This makes LC very well aligned with how (formal) Mathematics is practiced, which is why many (functional) algorithms are still presented directly in LC.
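
To make the "intrinsic meaning" point concrete, here are Church numerals and their arithmetic written directly as Python lambdas (a sketch; the names `zero`, `succ`, `add`, `mul`, `to_int` are my own labels, not standard notation):

```python
# A Church numeral n is the function that applies f to x exactly n times.
zero = lambda f: lambda x: x
succ = lambda n: lambda f: lambda x: f(n(f)(x))
add  = lambda m: lambda n: lambda f: lambda x: m(f)(n(f)(x))   # f^m after f^n
mul  = lambda m: lambda n: lambda f: m(n(f))                   # iterate f^n, m times

# Interpret a numeral by iterating "+1" starting from 0.
to_int = lambda n: n(lambda k: k + 1)(0)

two   = succ(succ(zero))
three = succ(two)
print(to_int(add(two)(three)), to_int(mul(two)(three)))  # 5 6
```

Note that `add` and `mul` mean addition and multiplication wherever they appear, with no surrounding machine state to consult, which is exactly the contrast with the tape-context-dependence described below for TMs.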

On the other hand, the semantics of TM programs are operational: the meaning is defined as the behaviour of the machine. In this sense, we can't cut out some section of tape and say "this is addition", because it is context-dependent. The machine's behaviour, when it hits that section of tape, depends on the machine's state, the lengths/offsets/etc. of the arguments, how much tape will be used for the result, whether any previous operation has corrupted that section of tape, etc. This is a horrendous way of working ("Nobody wants to program a Turing Machine"), which is why so many (imperative) algorithms are presented as pseudocode.

The other answers are good; here is one additional angle/reason for consideration that meshes with the others, yet might be even more definitive. It may be harder to keep clearly in mind, as the old origins are somewhat lost in the sands of time:

historical precedence!

Lambda calculus was introduced at least as early as 1932 in the following ref:

A. Church, "A set of postulates for the foundation of logic", Annals of Mathematics, Series 2, 33:346–366 (1932).

The Turing machine was introduced in ~1936, so the lambda calculus predates the appearance of the TM by several years!

So, in other words, a basic answer is that the lambda calculus is in many ways the ultimate legacy system of TCS. It's still around, in much the same way that Cobol is, even though not as much new development goes on in the language! It appears to be the earliest Turing-complete computational system introduced, and it even predates the fundamental idea of Turing completeness. It was only later retrospective analysis that showed that the lambda calculus, Turing machines, and the Post Correspondence Problem were equivalent, and that introduced the concept of Turing equivalence and the Church-Turing thesis.

The lambda calculus is simply a way to study computation from a logic-centric point of view, representing it more in terms of mathematical theorems and logical formula derivations. It also shows the deep relationship between computing and recursion, and the further tight coupling with mathematical induction.

This is a somewhat remarkable fact, because it suggests that in many ways the (at least theoretical) origins of computing were fundamentally in logic/mathematics, a thesis advanced and expanded in detail by Davis in his book Engines of Logic: Mathematicians and the Origin of the Computer. (Of course, the origins and fundamental role of Boolean algebra further reinforce that conceptual historical framework.)

Hence, dramatically, one might even say the lambda calculus is a bit like a pedagogical time machine for exploring the origins of computing!

$\begingroup$Addendum: the lambda calculus also seems to have been highly influenced by Principia Mathematica by Whitehead/Russell, which was also a major inspiration for Gödel's theorem. Some of this research was also inspired by Hilbert's 10th problem at the turn of the century, which asked for an algorithmic solution before "algorithm" was precisely (mathematically) defined; in fact, that quest is largely what led to the later precise technical definition.$\endgroup$
– vzn, Mar 25 '14 at 1:48

$\begingroup$Btw, a clarification (iiuc): it was actually Post canonical systems that were studied first by Post, and apparently the simpler Post Correspondence Problem is a special case. Also, it was Kleene who was instrumental in developing the concept of Turing completeness (not necessarily under that name) by helping prove all three major systems interchangeable/equivalent (TM, lambda calculus, Post canonical system).$\endgroup$
– vzn, Mar 25 '14 at 15:46

I have just come across this post and despite my post being rather late in the day (year!), I thought that perhaps my "penny's worth" may be of some use.

Whilst studying the subject at university, I had a similar thought on the matter; so, I posed the question of "why" to the lecturer, and the response was: "compilers". As soon as she mentioned it, the power behind reduction, and the art of assessing how best to manipulate it, suddenly made clear why it was, and still is, a potentially useful tool.

Well, that so to speak was my "aha" moment.

In my opinion, we often consider high-level languages, patterns, automata, algorithmic complexity etc. useful because we can relate them to the 'task' at hand; whereas lambda calculus seems a bit too abstract. However, there are still those out there who work with languages at a low level - and I imagine lambda calculus, object calculus and other related formalisations have helped them to understand and perhaps develop new theories and technologies from which the average programmer can then benefit. Indeed, it is probably not a core module for that reason, but (for the reasons I have stated) there will be the odd few - other than academics - who may find it integral to their chosen career path in computing.
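
To make the "compilers" point a little less abstract: an optimizer's inlining pass is essentially $\beta$-reduction on the program's AST. Here is a hypothetical sketch (tuple-based AST and names are my own invention, not any real compiler's API):

```python
# AST nodes as tagged tuples: ('var', name), ('lit', value),
# ('lam', param, body), ('app', fn, arg), ('add', left, right), ...

def subst(term, name, value):
    """Replace free occurrences of ('var', name) with value."""
    if term[0] == 'var':
        return value if term[1] == name else term
    if term[0] == 'lam':
        return term if term[1] == name else ('lam', term[1], subst(term[2], name, value))
    return (term[0],) + tuple(
        subst(t, name, value) if isinstance(t, tuple) else t for t in term[1:]
    )

def inline(term):
    """Rewrite ((lambda x: body) arg) to body[arg/x] -- one beta step."""
    if term[0] == 'app' and term[1][0] == 'lam':
        _, (_, param, body), arg = term
        return subst(body, param, arg)
    return term

# (lambda x: x + 1)(y)  ==>  y + 1, with no call left to perform at run time.
call = ('app', ('lam', 'x', ('add', ('var', 'x'), ('lit', 1))), ('var', 'y'))
print(inline(call))  # ('add', ('var', 'y'), ('lit', 1))
```

A real compiler adds care about variable capture, code-size growth and sharing, but the core rewrite it performs is exactly the reduction rule studied in the calculus.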

$\begingroup$What was the "aha" on compilers?$\endgroup$
– PhD, Dec 19 '14 at 8:51

$\begingroup$Your last paragraph seems entirely speculative and you never actually explain why the one word "compilers" answers the question.$\endgroup$
– David Richerby, Dec 19 '14 at 9:27

$\begingroup$@PhD: Beta-reduction & substitution aren't used when running programs, but are used inside optimizing compilers. That's not the main importance of lambda-calculus, but it's a very concrete application.$\endgroup$
– Blaisorblade, Jan 8 '15 at 13:16