I've searched around the Internet, and I can't find any explanations of CHI which don't rapidly degenerate into a lecture on logic theory which is drastically over my head. (These people talk as if "intuitionistic propositional calculus" is a phrase that actually means something to normal humans!)

Roughly put, CHI says that types are theorems, and programs are proofs of those theorems. But what the hell does that even mean??

So far, I've figured this out:

Consider id :: x -> x. Its type says "given that X is true, we can conclude that X is true". Seems like a reasonable theorem to me.

Now consider foo :: x -> y. As any Haskell programmer will tell you, this is impossible. You cannot write this function. (Well, without cheating anyway.) Read as a theorem, it says "given that any X is true, we can conclude that any Y is true". This is obviously nonsense. And, sure enough, you cannot write this function.

More generally, the function's arguments can be considered "things which are assumed to be true", and the result type can be considered "the thing that is true assuming all the other things are". If a function takes an argument of type x -> y, we can take that as an assumption that X being true implies that Y must be true.

For example, (.) :: (y -> z) -> (x -> y) -> x -> z can be taken as "assuming that Y implies Z, that X implies Y, and that X is true, we can conclude that Z is true". Which seems logically sensible to me.
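This reading can be checked directly in Haskell; a minimal sketch (compose is just an illustrative synonym for (.)):

```haskell
-- (.) read as a proof of hypothetical syllogism:
-- from "Y implies Z" and "X implies Y", conclude "X implies Z".
compose :: (y -> z) -> (x -> y) -> x -> z
compose f g = \x -> f (g x)
```

Applying compose to any two proofs (functions) of the premises yields a proof of the conclusion.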

Now, what the hell does Int -> Int mean?? o_O

The only sensible answer I can come up with is this: If you have a function X -> Y -> Z, then the type signature says "assuming that it's possible to construct a value of type X, and another of type Y, then it is possible to construct a value of type Z". And the function body describes exactly how you would do this.

That seems to make sense, but it's not very interesting. So clearly there must be more to it than this...

3 Answers

The Curry-Howard isomorphism simply states that types correspond to propositions, and values correspond to proofs.

Int -> Int doesn't really mean much interesting as a logical proposition. When interpreting something as a logical proposition, you're only interested in whether the type is inhabited (has any values) or not. So, Int -> Int just means "given an Int, I can give you an Int", and it is, of course, true. There are many different proofs of it (corresponding to various different functions of that type), but when taking it as a logical proposition, you don't really care.
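For instance (the names here are illustrative), each of these functions is a different proof of the same proposition:

```haskell
-- Three distinct proofs of the (boring) proposition Int -> Int:
proofA, proofB, proofC :: Int -> Int
proofA = id        -- "given an Int, return it"
proofB = (+ 1)     -- "given an Int, return its successor"
proofC = negate    -- "given an Int, return its negation"
```

As a proposition, all three are interchangeable: each one establishes that Int -> Int is inhabited.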

That doesn't mean interesting propositions can't involve such functions; just that that particular type is quite boring, as a proposition.

For an instance of a function type that isn't completely polymorphic and that has logical meaning, consider p -> Void (for some p), where Void is the uninhabited type: the type with no values (except ⊥, in Haskell, but I'll get to that later). The only way to get a value of type Void is if you can prove a contradiction (which is, of course, impossible), and since Void means you've proved a contradiction, you can get any value from it (i.e. there exists a function absurd :: Void -> a). Accordingly, p -> Void corresponds to ¬p: it means "p implies falsehood".
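A sketch of this in Haskell, assuming Void and absurd from Data.Void (in the base library; the name Not is an illustrative synonym):

```haskell
import Data.Void (Void, absurd)

-- Negation: NOT p is encoded as p -> Void.
type Not p = p -> Void

-- Ex falso quodlibet: from a contradiction, anything follows.
-- (absurd :: Void -> a is exactly this principle.)
contradiction :: p -> Not p -> q
contradiction proofP refutation = absurd (refutation proofP)
```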

Intuitionistic logic is just a certain kind of logic that common functional languages correspond to. Importantly, it's constructive: basically, a proof of a -> b gives you an algorithm to compute b from a, which isn't true in regular classical logic (because of the law of excluded middle, which will tell you that something is either true or false, but not why).

Even though functions like Int -> Int don't mean much as propositions, we can make statements about them with other propositions. For instance, we can declare the type of equality of two types (using a GADT):

{-# LANGUAGE GADTs #-}

data Equal a b where
  Refl :: Equal a a

If we have a value of type Equal a b, then a is the same type as b: Equal a b corresponds to the proposition a = b. The problem is that we can only talk about equality of types this way. But if we had dependent types, we could easily generalise this definition to work with any value, and so Equal a b would correspond to the proposition that the values a and b are identical. So, for instance, we could write:
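In hypothetical dependently-typed syntax (this is not valid Haskell; the names f, g and plusOneProof are only illustrative):

f, g :: Int -> Int
f x = x + 1
g x = 1 + x

plusOneProof :: (x :: Int) -> Equal (f x) (g x)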

Here, f and g are regular functions, so f could easily have type Int -> Int. Again, Haskell can't do this; you need dependent types to do things like this.
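Even within Haskell, though, the type-level Equal already behaves like a proper equality proposition; a self-contained sketch (castWith, sym, and trans are illustrative names):

```haskell
{-# LANGUAGE GADTs #-}

data Equal a b where
  Refl :: Equal a a

-- A proof of "a = b" lets us convert an a into a b:
castWith :: Equal a b -> a -> b
castWith Refl x = x

-- Symmetry and transitivity of equality, as programs:
sym :: Equal a b -> Equal b a
sym Refl = Refl

trans :: Equal a b -> Equal b c -> Equal a c
trans Refl Refl = Refl
```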

Typical functional languages are not really well-suited to writing proofs, not only because they lack dependent types, but also because of ⊥, which, having type a for all a, acts as a proof of any proposition. But total languages like Coq and Agda exploit the correspondence to act as proof systems as well as dependently-typed programming languages.
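This is easy to see in Haskell: the "impossible" theorem from the question, x -> y, has a trivial "proof" via ⊥. The following compiles, but the proof is bogus:

```haskell
-- Bottom inhabits every type, so every proposition has a "proof";
-- Haskell's type system alone cannot rule this out.
bogus :: x -> y
bogus _ = undefined
```

A total language rejects definitions like this, which is exactly what makes its types trustworthy as propositions.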

This is a great answer. I think that people read "propositions as types" and immediately think their programs automatically produce interesting proofs; this is not the case. In reality, any interesting proof about a program will involve the use of complicated dependent types forming propositions in intuitionistic logic.
–
Kristopher Micinski, Apr 18 '12 at 15:37


@MathematicalOrchid: Yes, given that the function is total.
–
Niklas B., Apr 18 '12 at 15:43

Just because Agda and Coq are total, it doesn't make them "non-Turing-complete". You can write a Turing machine simulator in Agda: you just can't give it a type which promises that it will halt; you have to use a type which acknowledges the potentially infinite delay. From a logical perspective, an "infinitely delayable proof of X" should certainly not convince you of X. In Haskell, non-totality means that you can't be sure of X, even if you have a value in X. Totality buys you the CHI, and you don't have to give up Turing-completeness: you just have to give up exaggerating.
–
pigworker, Apr 19 '12 at 5:59


Thanks. Of course, you need assistance from the RTS to do anything! Total languages offer the consumer the choice to run infinitary computations as much or as little as desired. Partial languages offer only the dangerous version. It's not true that "you can't write Haskell programs which interact with files", and it's not true that "you can't write Agda programs which run for ever". People genuinely seem to believe that totality is some sort of limitation in that respect, when the truth is that it's the partial languages which are limited in the promises they allow you to make.
–
pigworker, Apr 19 '12 at 22:24

The only sensible answer I can come up with is this: If you have a function X -> Y -> Z, then the type signature says "assuming that it's possible to construct a value of type X, and another of type Y, then it is possible to construct a value of type Z". And the function body describes exactly how you would do this. That seems to make sense, but it's not very interesting. So clearly there must be more to it than this...

Well, yes, there is quite a lot more, because it has a lot of implications and opens up a lot of questions.

First of all, your discussion of the CHI is framed exclusively in terms of implication/function types (->). You don't talk about this, but you have probably seen how conjunction and disjunction correspond to product and sum types respectively. But what about other logical operators, like negation, universal quantification and existential quantification? How do we translate logical proofs involving these into programs? It turns out that it's roughly like this:

Universal quantification over propositional (not individual) variables corresponds to parametric polymorphism. So for example, the polymorphic function id really has the type forall a. a -> a

Existential quantification over propositional variables corresponds to a handful of things that have to do with data or implementation hiding: abstract data types, module systems and dynamic dispatch. GHC's existential types are related to this.
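These correspondences can be sketched in Haskell (the names are illustrative; GHC needs the ExistentialQuantification and RankNTypes extensions):

```haskell
{-# LANGUAGE ExistentialQuantification, RankNTypes #-}

-- Conjunction: a proof of (A AND B) is a pair of proofs.
fstProof :: (a, b) -> a
fstProof (x, _) = x

-- Disjunction: a proof of (A OR B) carries a proof of one side.
orIntro :: a -> Either a b
orIntro = Left

-- Universal quantification over propositions: parametric polymorphism.
identity :: forall a. a -> a
identity x = x

-- Existential quantification: a hidden witness type, packaged with
-- evidence about it (here, that it can be shown).
data ExistsShow = forall a. Show a => ExistsShow a

describe :: ExistsShow -> String
describe (ExistsShow x) = show x
```

Note that describe works without ever learning which type is hidden inside ExistsShow; that hiding is exactly what the existential expresses.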

Other than that, it also means that all sorts of proofs about logics instantly translate into proofs about programming languages. For example, the cut-elimination theorem for intuitionistic propositional logic corresponds, through the isomorphism, to the fact that every program in simply-typed lambda calculus terminates (strong normalisation).

Now, what the hell does Int -> Int mean?? o_O

It's a type, or alternatively a proposition. Given a definition such as f = (+ 1) with f :: Int -> Int, the name f denotes a "program" (in a broad sense that admits both functions and constants as "programs"), or alternatively a proof. The semantics of the language must either provide f as a primitive rule of inference, or demonstrate how f is a proof that can be constructed from such rules and premises.

These rules are often specified in terms of equational axioms that define the base members of a type and rules that allow you to prove what other programs inhabit that type. For example, switching from Int to Nat (natural numbers from 0 forward), we could have these rules:

Axiom: Zero :: Nat (Zero is a primitive proof of Nat)

Rule: x :: Nat ==> Succ x :: Nat

Rule: x :: Nat, y :: Nat ==> x + y :: Nat

Rule: x + Zero ==> x

Rule: Succ x + y ==> Succ (x + y)

These rules are enough to prove many theorems about addition of natural numbers. These proofs will also be programs.
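These rules can be rendered directly in Haskell; a sketch in which the two equational rules become the defining clauses of an addition function, plus one extra clause (my own addition, not one of the rules above) to make the function total:

```haskell
data Nat = Zero | Succ Nat deriving (Show, Eq)

add :: Nat -> Nat -> Nat
add x Zero     = x                -- Rule: x + Zero ==> x
add (Succ x) y = Succ (add x y)   -- Rule: Succ x + y ==> Succ (x + y)
add Zero y     = y                -- extra clause: covers Zero + Succ y
```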

Perhaps the best way to understand what it means is to start (or try) using types as propositions and programs as proofs. The best approach is to learn a language with dependent types, such as Agda (which is written in Haskell and is itself similar to Haskell). There are various articles and courses on that language. Learn You an Agda is incomplete, but it tries to simplify things, much as the LYAHFGG book does for Haskell.

There you can see the proposition (m + n) + p ≡ m + (n + p) as a type, and its proof as a function. There are more advanced techniques for such proofs (e.g., preorder reasoning); combinators in Agda play a role similar to tactics in Coq.

You can also prove that a compiler produces a program whose execution gives the same value as interpreting the source program; this has been done in Coq. This book is also a good read on the topic of language modelling and program verification.

Anything else that can be proven in constructive mathematics can be expressed this way, since Martin-Löf's type theory is equivalent in expressive power to ZFC. In fact, the Curry-Howard isomorphism can be extended to physics and topology, and to algebraic topology.

This is an entire other question, of course, but one time I did try to comprehend one of those advanced languages. (I forget whether it was Agda, Coq, Epigram or what.) The documentation casually tosses around technical terms whose meaning is utterly opaque to me, and I rapidly became lost...
–
MathematicalOrchid, Apr 18 '12 at 20:40


@MathematicalOrchid CH is an isomorphism: it illuminates a relationship between concepts of two different domains. You can have an idea about one domain (simply-typed lambda calculus or Per Martin-Löf's dependent type theory), but in order to construct an isomorphism you must also be aware of the other domain (the system of natural deduction or intuitionistic logic, respectively).
–
JJJ, Apr 18 '12 at 22:40