**Chris:** I'll be working on this page heavily until 11--11:30 or so. Sorry not to do it last night, I crashed.

+ξ

+Θ′

+≡

~~~~

#Recursion: fixed points in the Lambda Calculus#

@@ -26,7+29,7 @@ How could we compute the length of a list? Without worrying yet about what Lambd

In OCaml, you'd define that like this:

let rec length = fun xs ->

- if xs == [] then 0 else 1 + length (tail xs)

+ if xs = [] then 0 else 1 + length (tail xs)

in ... (* here you go on to use the function "length" *)

In Scheme you'd define it like this:

@@ -51,7+54,7 @@ The main question for us to dwell on here is: What are the `let rec` in the OCam

Answer: These work a lot like `let` expressions, except that they let you use the variable `length` *inside* the body of the function being bound to it---with the understanding that it will there be bound to *the same function* that you're *then* in the process of binding `length` to. So our recursively-defined function works the way we'd expect it to. Here is OCaml:
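
    let rec length = fun xs ->
                       if xs = [] then 0 else 1 + length (tail xs)
    in length [20; 30]
    (* evaluates to 2: the `length` inside the body is the very function being defined *)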

If you instead use an ordinary `let` (or `let*`), here's what would happen, in OCaml:

let length = fun xs ->

- if xs == [] then 0 else 1 + length (tail xs)

+ if xs = [] then 0 else 1 + length (tail xs)

in length [20; 30]

(* fails with error "Unbound value length" *)

@@ -94,7+97,7 @@ We can verify this by wrapping the whole expression in a more outer binding of `

let length = fun xs -> 99

in let length = fun xs ->

- if xs == [] then 0 else 1 + length (tail xs)

+ if xs = [] then 0 else 1 + length (tail xs)

in length [20; 30]

(* evaluates to 1 + 99 *)

@@ -131,27+134,27 @@ So how could we do it? And how do OCaml and Scheme manage to do it, with their `

You'd find that it works! This is because `define` in Scheme is really shorthand for `letrec`, not for plain `let` or `let*`. So we should regard this as cheating, too.

-3. In fact, it *is* possible to define the `length` function in the Lambda Calculus despite these obstacles. This depends on using the "version 3" encoding of lists, and exploiting its internal structure: that it takes a function and a base value and returns the result of folding that function over the list, with that base value. So we could use this as a definition of `length`:

+3. In fact, it *is* possible to define the `length` function in the Lambda Calculus despite these obstacles, without yet knowing how to implement `letrec` in general. We've already seen how to do it, using our right-fold (or left-fold) encoding for lists, and exploiting their internal structure. Those encodings take a function and a seed value and return the result of folding that function over the list, with that seed value. So we could use this as a definition of `length`:

- \xs. xs (\x sofar. successor sofar) 0

+ \xs. xs (\x sofar. succ sofar) 0

- What's happening here? We start with the value `0`, then we apply the function `\x sofar. successor sofar` to the two arguments <code>x<sub>n</sub></code> and `0`, where <code>x<sub>n</sub></code> is the last element of the list. This gives us `successor 0`, or `1`. That's the value we've accumuluted "so far." Then we go apply the function `\x sofar. successor sofar` to the two arguments <code>x<sub>n-1</sub></code> and the value `1` that we've accumulated "so far." This gives us `two`. We continue until we get to the start of the list. The value we've then built up "so far" will be the length of the list.

+ What's happening here? We start with the value `0`, then we apply the function `\x sofar. succ sofar` to the two arguments <code>x<sub>n</sub></code> and `0`, where <code>x<sub>n</sub></code> is the last element of the list. This gives us `succ 0`, or `1`. That's the value we've accumulated "so far." Then we apply the function `\x sofar. succ sofar` to the two arguments <code>x<sub>n-1</sub></code> and the value `1` that we've accumulated "so far." This gives us `two`. We continue until we get to the start of the list. The value we've then built up "so far" will be the length of the list. (A rough OCaml rendering of this trick appears below, after this list.)

We can use similar techniques to define many recursive operations on

-lists and numbers. The reason we can do this is that our "version 3,"

+lists and numbers. The reason we can do this is that our

fold-based encoding of lists, and Church's encodings of

numbers, have an internal structure that *mirrors* the common recursive

operations we'd use lists and numbers for. In a sense, the recursive

structure of the `length` operation is built into the data

structure we are using to represent the list. The non-recursive

-version of length exploits this embedding of the recursion into

+definition of length, above, exploits this embedding of the recursion into

the data type.

This is one of the themes of the course: using data structures to

-encode the state of some recursive operation. See discussions of the

+encode the state of some recursive operation. See our discussions later this semester of the

[[zipper]] technique, and [[defunctionalization]].

-As we said before, it does take some ingenuity to define functions like `tail` or `predecessor` for these encodings. However it can be done. (And it's not *that* difficult.) Given those functions, we can go on to define other functions like numeric equality, subtraction, and so on, just by exploiting the structure already present in our encodings of lists and numbers.

+As we've seen, it does take some ingenuity to define functions like `tail` or `pred` for these encodings. However it can be done. (And it's not *that* difficult.) Given those functions, we can go on to define other functions like numeric equality, subtraction, and so on, just by exploiting the structure already present in our encodings of lists and numbers.
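
Here, as an aside, is a rough OCaml analogue of the fold-based `length` from item 3 above: OCaml's built-in `List.fold_right` stands in for the fold that our encoded lists carry around internally, so `length` itself needs no `rec`.

    (* the recursion lives inside fold_right, not in our definition *)
    let length xs = List.fold_right (fun _x sofar -> sofar + 1) xs 0

    let () = assert (length [20; 30] = 2)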

With sufficient ingenuity, a great many functions can be defined in the same way. For example, the factorial function is straightforward. The function which returns the nth term in the Fibonacci series is a bit more difficult, but also achievable.
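
For a taste of how that goes, here is a rough OCaml sketch of factorial in the same spirit (the helper `church_fold` is our stand-in for a Church numeral's built-in iteration; the names are ours):

    (* church_fold n f z applies f to z, n times: what a Church numeral does for us *)
    let rec church_fold n f z = if n = 0 then z else f (church_fold (n - 1) f z)

    (* factorial with no recursion of its own: carry a pair (next index, factorial so far) up from (1, 1) *)
    let fact n = snd (church_fold n (fun (i, acc) -> (i + 1, i * acc)) (1, 1))

    let () = assert (fact 5 = 120)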

@@ -159,28+162,22 @@ With sufficient ingenuity, a great many functions can be defined in the same way

However, some computable functions are just not definable in this

way. We can't, for example, define a function that tells us, for

-whatever function `f` we supply it, what is the smallest integer `x`

-where `f x` is `true`. (You may be thinking: but that

-smallest-integer function is not a proper algorithm, since it is not

-guaranteed to halt in any finite amount of time for every argument.

-This is the famous [[!wikipedia Halting problem]]. But the fact that

-an implementation may not terminate doesn't mean that such a function

-isn't well-defined. The point of interest here is that its definition

-requires recursion in the function definition.)

+whatever function `f` we supply it, what is the smallest natural number `x`

+where `f x` is `true` (even if `f` itself is a function we do already know how to define).

Neither do the resources we've so far developed suffice to define the

-[[!wikipedia Ackermann function]]:

+[[!wikipedia Ackermann function]]. In OCaml:

- A(m,n) =

- | when m == 0 -> n + 1

- | else when n == 0 -> A(m-1,1)

- | else -> A(m-1, A(m,n-1))

+ (* OCaml value names must start lowercase, so we write `ack` for the mathematical A *)

+ let rec ack = fun (m,n) ->

+ if m = 0 then n + 1

+ else if n = 0 then ack(m-1,1)

+ else ack(m-1, ack(m,n-1));;

A(0,y) = y+1

A(1,y) = 2+(y+3) - 3

A(2,y) = 2(y+3) - 3

A(3,y) = 2^(y+3) - 3

- A(4,y) = 2^(2^(2^...2)) [where there are y+3 2s] - 3

+ A(4,y) = 2^(2^(2^...2)) (* where there are y+3 2s *) - 3

...
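
As a quick sanity check of a few of those identities, using the OCaml definition above (here written `ack`; the test values are ours):

    let () =
      assert (ack (1, 4) = 2 + (4 + 3) - 3);   (* both are 6 *)
      assert (ack (2, 3) = 2 * (3 + 3) - 3);   (* both are 9 *)
      assert (ack (3, 3) = 61)                 (* 2^(3+3) - 3 *)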

Many simpler functions always *could* be defined using the resources we've so far developed, although those definitions won't always be very efficient or easily intelligible.

@@ -191,54+188,59 @@ But functions like the Ackermann function require us to develop a more general t

###Fixed points###

-In general, a **fixed point** of a function `f` is any value `x`

-such that `f x` is equivalent to `x`. For example,

+In mathematics, a **fixed point** of a function `f` is any value `ξ`

+such that `f ξ` is equivalent to `ξ`. For example,

consider the squaring function `square` that maps natural numbers to their squares.

`square 2 = 4`, so `2` is not a fixed point. But `square 1 = 1`, so `1` is a

-fixed point of the squaring function.

+fixed point of the squaring function. (Can you think of another?)

There are many beautiful theorems guaranteeing the existence of a

fixed point for various classes of interesting functions. For

-instance, imainge that you are looking at a map of Manhattan, and you

-are standing somewhere in Manhattan. The the [[!wikipedia Brouwer

+instance, imagine that you are looking at a map of Manhattan, and you

+are standing somewhere in Manhattan. Then the [[!wikipedia Brouwer

fixed-point theorem]] guarantees that there is a spot on the map that is

directly above the corresponding spot in Manhattan. It's the spot

where the blue you-are-here dot should be.

-Whether a function has a fixed point depends on the set of arguments

+Whether a function has a fixed point depends on the domain of arguments

it is defined for. For instance, consider the successor function `succ`

that maps each natural number to its successor. If we limit our

attention to the natural numbers, then this function has no fixed

point. (See the discussion below concerning a way of understanding

-the successor function on which it does have a fixed point.)

+the successor function on which it *does* have a fixed point.)

-In the Lambda Calculus, we say a fixed point of a term `f` is any term `X` such that:

+In the Lambda Calculus, we say a fixed point of a term `f` is any term `ξ` such that:

- X <~~> f X

+ ξ <~~> f ξ

+

+This is a bit different than the general mathematical definition, in that here we're saying it is *terms* that are fixed points, not *values*. We like to think that some lambda terms represent values, such as our term `\f z. z` representing the numerical value zero (and also the truth-value false, and also the empty list... on the other hand, we never did explicitly agree that those three values are all the same thing, did we?). But some terms in the Lambda Calculus don't even have a normal form. We don't want to count them as values. But the way we're proposing to use the notion of a fixed point here, they too are allowed to be fixed points, and to have fixed points of their own.

+

+Note that `M <~~> N` doesn't entail that `M` and `N` have a normal form (though if they do, they will have the same normal form). It just requires that there be some term that they both reduce to. It may be that that term itself never stops being reducible. For instance, `(\x. x x x) (\x. x x x)` reduces to `(\x. x x x) (\x. x x x) (\x. x x x)`, so the two terms are convertible; but neither of them ever reduces to a normal form.

You should be able to immediately provide a fixed point of the

-identity combinator I. In fact, you should be able to provide a

+identity combinator `I`. In fact, you should be able to provide a

whole bunch of distinct fixed points.

With a little thought, you should be able to provide a fixed point of

-the false combinator, KI. Here's how to find it: recall that KI

-throws away its first argument, and always returns I. Therefore, if

-we give it I as an argument, it will throw away the argument, and

-return I. So KII ~~> I, which is all it takes for I to qualify as a

-fixed point of KI.

+the false combinator, `KI`. Here's how to find it: recall that `KI`

+throws away its first argument, and always returns `I`. Therefore, if

+we give it `I` as an argument, it will throw away the argument, and

+return `I`. So `KII` ~~> `I`, which is all it takes for `I` to qualify as a

+fixed point of `KI`.

-What about K? Does it have a fixed point? You might not think so,

+What about `K`? Does it have a fixed point? You might not think so,

after trying on paper for a while.

-However, it's a theorem of the Lambda Calculus that every formula has

-a fixed point. In fact, it will have infinitely many, non-equivalent

+However, it's a theorem of the Lambda Calculus that *every* lambda term has

+a fixed point. Even bare variables like `x`! In fact, they will have infinitely many, non-equivalent

fixed points. And we don't just know that they exist: for any given

formula, we can explicitly define many of them.

-Yes, as we've mentioned, even the formula that you're using the define

-the successor function will have a fixed point. Isn't that weird?

+As we've mentioned, even the formula that you're using to define

+the successor function will have a fixed point. Isn't that weird? There's some `ξ` such that it is equivalent to `succ ξ`?

Think about how it might be true. We'll return to this point below.

+

###How fixed points help define recursive functions###

Recall our initial, abortive attempt above to define the `length` function in the Lambda Calculus. We said "What we really want to do is something like this:

-`Θ′` has the advantage that `f (Θ′ f)` really *reduces to* `Θ′ f`. Whereas `f (Y′ f)` is only *convertible with* `Y′ f`; that is, there's a common formula they both reduce to. For most purposes, though, either will do.

+Applying either of these to a term `f` gives a fixed point `ξ` for `f`, meaning that `f ξ` <~~> `ξ`. `Θ′` has the advantage that `f (Θ′ f)` really *reduces to* `Θ′ f`. Whereas `f (Y′ f)` is only *convertible with* `Y′ f`; that is, there's a common formula they both reduce to. For most purposes, though, either will do.

You may notice that both of these formulas have eta-redexes inside them: why can't we simplify the two `\n. u u f n` inside `Θ′` to just `u u f`? And similarly for `Y′`?
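
As an aside, here is a rough OCaml sketch of the same eta-expansion trick in a strict, call-by-value setting, loosely modeled on `Y′` (the type `self` and all the names below are ours, not part of the main text). OCaml's types forbid direct self-application, so we hide it behind a constructor:

    type 'a self = Wrap of ('a self -> 'a)

    let unwrap (Wrap u) = u

    (* y f builds a fixed point of f; the eta-expanded (fun n -> ...) plays the
       role of the \n. ... n bodies inside Θ′ and Y′ *)
    let y f =
      let g = Wrap (fun u -> f (fun n -> unwrap u u n)) in
      unwrap g g

    (* usage: length defined without `let rec` *)
    let length = y (fun length xs ->
      match xs with
      | [] -> 0
      | _ :: rest -> 1 + length rest)

    let () = assert (length [20; 30] = 2)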

@@ -482,9+485,9 @@ When you try to evaluate the application of that to some argument `M`, it's goin

where `self` is equivalent to the very formula `\n. self n` that contains it. So the evaluation will proceed:

(\n. self n) M ~~>

- self M ~~>

+ self M <~~>

(\n. self n) M ~~>

- self M ~~>

+ self M <~~>

...

You've written an infinite loop!

@@ -565,7+568,7 @@ You should be able to see that `sink` will consume as many `true`s as

we throw at it, then turn into the identity function after it

encounters the first `false`.

-The key to the recursion is that, thanks to Y, the definition of

+The key to the recursion is that, thanks to `Y`, the definition of

`sink` contains within it the ability to fully regenerate itself as

many times as is necessary. The key to *ending* the recursion is that

the behavior of `sink` is sensitive to the nature of the input: if the

@@ -590,18+593,17 @@ factorial of `n-1`. But if we leave out the base case, we get

3! = 3 * 2! = 3 * 2 * 1! = 3 * 2 * 1 * 0! = 3 * 2 * 1 * 0 * -1! ...

-That's why it's crucial to declare that 0! = 1, in which case the

+That's why it's crucial to declare that `0!` = `1`, in which case the

recursive rule does not apply. In our terms,

- fac = Y (\fac n. zero? n 1 (fac (predecessor n)))

+ fact = Y (\fact n. zero? n 1 (mult n (fact (predecessor n))))

-If `n` is 0, `fac` reduces to 1, without computing the recursive case.

+If `n` is `0`, `fact` reduces to `1`, without computing the recursive case.
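
To see both the recursive case and the base case at work, here is a sketch (ours) of how `fact 2` unfolds, assuming `mult` is multiplication on Church numerals and writing `2`, `1`, `0` for the corresponding numerals:

    fact 2 <~~> zero? 2 1 (mult 2 (fact (predecessor 2)))
           <~~> mult 2 (fact 1)
           <~~> mult 2 (mult 1 (fact 0))
           <~~> mult 2 (mult 1 (zero? 0 1 (mult 0 (fact (predecessor 0)))))
           <~~> mult 2 (mult 1 1)
           <~~> 2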

-Curry originally called `Y` the paradoxical combinator, and discussed

+Curry originally called `Y` the "paradoxical" combinator, and discussed

it in connection with certain well-known paradoxes from the philosophy

-literature. The truth teller paradox has the flavor of a recursive

-function without a base case: the truth-teller paradox (and related

-paradoxes).

+literature. The truth-teller paradox has the flavor of a recursive

+function without a base case:

(1) This sentence is true.

@@ -617,6+619,8 @@ assume that sentences can have for their meaning boolean functions

like the ones we have been working with here. Then the sentence *John

is John* might denote the function `\x y. x`, our `true`.

+<!-- Jim says: I haven't yet followed the next chunk to my satisfaction -->

+

Then (1) denotes a function from whatever the referent of *this

sentence* is to a boolean. So (1) denotes `\f. f true false`, where

the argument `f` is the referent of *this sentence*. Of course, if

@@ -635,10+639,10 @@ sentence in which it occurs, the sentence denotes a fixed point for