Monthly Archives: March 2009

I don’t have much time, so here’s just a quick jot. I don’t think System IG can prove this rather obvious proposition:

|- GL(\a. Ga(Ka)) (\a x. x)

Or translated to more conventional notation: (\a x. x) : (a : Type) -> (a -> a), the type of the polymorphic identity function. In order to prove this, we need the axiom LL (conventionally, Type : Type). But that axiom is inconsistent. So I’ve tweaked the rules a bit to allow the above to be proven, without allowing LL. It’s possible that these tweaks are still consistent; they don’t admit Girard’s paradox, at least.
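For intuition, the conventional reading of this judgement is just the typing of the polymorphic identity function. In, say, Lean notation (my choice of concrete syntax, not the post's):

```lean
-- (λ a x. x) : (a : Type) → a → a, the polymorphic identity,
-- written out in conventional dependent-type notation.
def polyId : (a : Type) → a → a :=
  fun a x => x
```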

The rules I’m modifying can be found on page 8 of the above paper, in the IG box. My tweak changes rule Gi from:

Γ, Xx |- Yx(Zx) ; Γ |- LX ; x not in FV(Γ,X,Y,Z) => Γ |- GXYZ

To:

Γ, Xx |- Yx(Zx) ; Γ |- L(GXY) ; x not in FV(Γ,X,Y,Z) => Γ |- GXYZ

I’m saying that you need to prove that GXY (conventionally, (x:X) -> Y x) is a type before you can say something has that type. Without more modifications, this is equivalent to the original. However, now I add two new rules:

Γ |- L(GL(KL))
Γ |- GL(KL)p => Γ |- L(GLp)

These say: (Type -> Type) : Type, and if p : Type -> Type then ((x:Type) -> p x) : Type. The latter one is the one I’ve heard is still consistent, but you can’t really do anything with it without the former. So, it needs further investigation.

Trying to prove the consistency of this new system, I need a semantics for terms. If I can model these semantics in CIC (the calculus used by Coq), then if CIC is consistent, so is this system. My idea for semantics is as follows:

Bare lambda terms don’t really have semantics; they are modeled by their syntax. But typing proofs have the semantics of whatever they are proving as well-typed. If I use [-] as the meaning function, then the semantics of [Lx] is a type (because it is a proof that x is a type). Formally:

[Lx] : Type
[Gxyz] : (a:[Lx]) -> [Gx(KL)y] a

Simplified, the latter means that the semantics of a proof that x has type y -> z is a function from y to z. (Rather, it’s the dependent version of that). These semantics motivate the axioms I am adding.

Okay, that’s all the time I have!


When working on the latest incarnation of my System IG compiler, I used a thingy which I now realize ought to be characterized as a design pattern. It substantially changed the way I was thinking about the code, which is what makes it interesting.

Summary: separate an algorithm into certificate constructors and a search algorithm.

A large class of algorithms can be considered, in some way, search algorithms: given a problem, they search for a solution to it. For example, you typically wouldn’t phrase the quadratic formula as a search algorithm, but it is one; it’s just a very smart, fast one. It is given a, b, and c and searches for a solution to the equation ax² + bx + c = 0.

The certificate design pattern separates the algorithm into two modules: the certificate module and the algorithm module. The certificate module provides constructors for solutions to the problem. For each correct solution, it is possible to construct a certificate, and it is impossible to construct a certificate for an incorrect solution. The certificate module for the quadratic formula algorithm might look like this:
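The original listing did not survive in this archive; here is a sketch of what such a certificate module might have looked like (all the names — `Certificate`, `certify`, `certA`, and so on — are my invention):

```haskell
-- Sketch of a certificate module for the quadratic formula.
-- In the real module the Certificate constructor would NOT be
-- exported: certify would be the only way to obtain a Certificate.
data Certificate = Certificate
  { certA, certB, certC, certX :: Double }

-- certify a b c x succeeds only if x actually solves
-- a*x^2 + b*x + c = 0 (up to floating-point tolerance).
certify :: Double -> Double -> Double -> Double -> Maybe Certificate
certify a b c x
  | abs (a*x*x + b*x + c) < 1e-9 = Just (Certificate a b c x)
  | otherwise                    = Nothing
```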

There is only one way to construct a Certificate, and that is to pass it a solution to the quadratic equation. If it is not actually a solution, a certificate cannot be constructed for it. This module is very easy to verify. The algorithm module is obvious:
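That listing is also missing here; a sketch of what the algorithm module might have looked like (the `certify` interface is an assumption of mine, repeated so the sketch stands alone):

```haskell
import Data.Maybe (fromJust)

-- Assumed certificate interface; in the real code this would be
-- imported, with the Certificate constructor kept abstract.
data Certificate = Certificate
  { certA, certB, certC, certX :: Double }

certify :: Double -> Double -> Double -> Double -> Maybe Certificate
certify a b c x
  | abs (a*x*x + b*x + c) < 1e-9 = Just (Certificate a b c x)
  | otherwise                    = Nothing

-- The algorithm: compute a root with the quadratic formula, then
-- certify it. fromJust is safe unless the formula has a typo, in
-- which case crashing is the right response to a programming error.
solve :: Double -> Double -> Double -> Certificate
solve a b c = fromJust (certify a b c root)
  where root = (-b + sqrt (b*b - 4*a*c)) / (2*a)
```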

Here, we use the quadratic formula and construct a certificate of its correctness. If we made a typo in the formula, then certify would return Nothing and we would get an error when we fromJust it (an error is justified in this case, rather than an exception, because we made a mistake when programming — it’s like an assert).

The client to the algorithm gets a certificate back from solve, and can extract its solution. All the information needed to verify that the certificate is a correct certificate for the given problem should be provided. For example, if Certificate had only contained x instead of a,b,c,x, then we could have implemented solve like:

solve a b c = fromJust (certify 0 0 0 0)

That succeeds, because 0 is a valid solution of the all-zero equation, but we have not solved the problem we were given. The client needs to be able to inspect that a, b, c match the input values. Maximally untrusting client code might look like this:
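Since the client listing is missing here too, a sketch of what it might have looked like, assuming a `certify`/`certA`-style interface (my invention, repeated so the sketch compiles on its own):

```haskell
import Data.Maybe (fromJust)

-- Assumed certificate interface (would be imported in real code,
-- with the Certificate constructor kept abstract).
data Certificate = Certificate
  { certA, certB, certC, certX :: Double }

certify :: Double -> Double -> Double -> Double -> Maybe Certificate
certify a b c x
  | abs (a*x*x + b*x + c) < 1e-9 = Just (Certificate a b c x)
  | otherwise                    = Nothing

-- A maximally untrusting client: whatever solver it is handed, it
-- re-checks that the certificate is about the problem it asked about
-- before extracting the solution, so a buggy solver becomes a
-- runtime error rather than a wrong answer.
checkedRoot :: (Double -> Double -> Double -> Certificate)
            -> Double -> Double -> Double -> Double
checkedRoot solver a b c
  | certA cert == a && certB cert == b && certC cert == c = certX cert
  | otherwise = error "solver returned a certificate for the wrong problem"
  where cert = solver a b c

-- The cheating solver from the text: always certifies the all-zero
-- problem, which checkedRoot will reject for any other inputs.
cheat :: Double -> Double -> Double -> Certificate
cheat _ _ _ = fromJust (certify 0 0 0 0)
```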

Here we can give any function whatsoever for solve, and we will never report an incorrect answer (replacing the incorrectness with a runtime error).

This is certainly overkill for this example, but in the System IG compiler it makes a lot of sense. I have a small set of rules which form well-typed programs, and have put in much effort to prove this set of rules is consistent and complete. But I want to experiment with different interfaces, different inference algorithms, different optimizations, etc.

So my Certificate implements combinators for each of the rules in my system, and all the different algorithms plug into that set of rules. So whenever I write a typechecker algorithm, if it finds a solution, the solution is correct by construction. This gives me a lot of freedom to play with different techniques.

Verification rules can be more involved than this single function that constructs a certificate. In the System IG compiler, there are 12 construction rules, most of them taking other certificates as arguments (which would make them certificate “combinators”). I’ll show an example of more complex certificate constructors later.

What is interesting about this pattern, aside from the added correctness and verification guarantees, is that it changed the way I thought while I was implementing the algorithm. Instead of being master of the computer, telling it what to do, it was more like a puzzle I had to solve. In some ways it was harder, but I attribute that to redistributing the workload: it’s harder because I am forced to write code that is correct from the get-go, instead of accidentally introducing bugs and thinking I’m done.

The other interesting mental change was that it often guided my solution. I would look at the certificate I’m trying to create, and see which constructors could create it. This gave me an idea of the information I was after. This information is the information necessary to convince the client that my solution is correct; I cannot proceed without it.

Theoretically, the algorithm part could be completely generic. It might just run a generic search, like Dijkstra’s algorithm. If it finds a certificate, then it has solved your problem correctly. Solutions for free! (But this will not be practical in most cases; it might not yield a correct algorithm by other criteria, such as “always halts”.)

Here’s an example of a more complex certificate. The domain is SK combinator calculus, and a Conversion is a certificate that holds two terms. If a Conversion can be constructed, then the two terms are convertible.
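The listing itself was lost in archiving; here is a sketch of what such a module might have looked like (the constructor names and the exact rule set are my guesses):

```haskell
-- A conversion certificate for SK combinator calculus. In the real
-- module the Conversion constructor would not be exported: the
-- combinators below would be the only way to build one, so holding
-- a Conversion proves the two terms are interconvertible.
data Term = S | K | App Term Term
  deriving (Eq, Show)

data Conversion = Conversion Term Term  -- kept abstract in real code

-- Reflexivity: every term converts to itself.
refl :: Term -> Conversion
refl t = Conversion t t

-- Symmetry: convertibility runs both ways.
sym :: Conversion -> Conversion
sym (Conversion a b) = Conversion b a

-- Transitivity: compose two conversions when the middle terms match.
trans :: Conversion -> Conversion -> Maybe Conversion
trans (Conversion a b) (Conversion b' c)
  | b == b'   = Just (Conversion a c)
  | otherwise = Nothing

-- Reduction rules: K x y ~> x, and S x y z ~> x z (y z).
kRule :: Term -> Term -> Conversion
kRule x y = Conversion (App (App K x) y) x

sRule :: Term -> Term -> Term -> Conversion
sRule x y z = Conversion (App (App (App S x) y) z)
                         (App (App x z) (App y z))

-- Congruence: convert the two sides of an application independently.
appCong :: Conversion -> Conversion -> Conversion
appCong (Conversion f g) (Conversion a b) =
  Conversion (App f a) (App g b)
```

A typechecker built on top would manufacture Conversions only through combinators like these, so any conversion it reports is correct by construction.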


It has been a little while since I posted anything. So here’s a general update about my work and other aspects of my life.

I’ve been passively thinking about Dana, but not doing very much significant. I have abandoned the RTS language on the grounds that it is too hard for my little mind right now. Hopefully I will come back to it, but it can be safely moved further down in the queue without hurting my progress.

That brings me back to the dependently typed core. I’ve switched directions and decided that it really ought to be total. In experimenting with how to bring partiality back into a total language, it finally sank in how partiality is, in its essence, a monad. It is one in the very same way Maybe is, though my practical ideas about the essence of nontermination had prevented me from seeing that.

I’ve been fooling around with the semantics of system IG, and have gotten nowhere interesting. Whatever beautiful connection between application and typing there is, I haven’t seen it yet.

Since typechecking arbitrary terms in system IG is undecidable, I’ve been trying to write an “interactive typechecker”: essentially a library for typechecking. Various inference algorithms can be built on top, but their correctness follows from the underlying library. I am stuck on the representation of conversion proofs between lambda terms.

All of these are rather minor problems that can be worked through with a little thought. I’ve been distracted, however, because of a woman named Anna, with whom I’ve been having conversations of at least two hours almost every night. I had forgotten what it meant to have good conversations about nontechnical things (it’s really nice). We seem to like each other, despite my friends’ warnings about her being insane or something :-). I’m interested to see where it goes.

Oh, and I’m starting school again in the summer, to finish my Bachelor’s in mathematics. The intention is then to go do a PhD program with Paul Hudak, if that can be arranged, or something similar.

More meat soon.


Over the last week, I took a break from Dana and wrote a Geometry Wars-style action game. The main idea the game explores is virtuosity. Essentially there is so much to control that our puny human brains cannot comprehend it all for a long time; the game is engineered to have a very long learning curve.

“Bomb enemies” (more precisely, enemies with a bomb component) explode in a few units radius, and if you are nearby, you die. On the other hand, any other enemies nearby also die, and because you control the spawn location of new enemies (they always spawn opposite the direction you are shooting), you can use this to your advantage.

“Strong enemies” take three hits to kill, so generally you don’t want them around. However, if you have the prism gun, then the strong enemies refract and split your bullets, making many more of them. So there is incentive to keep the strong enemies around to use to your advantage if you pick up a prism gun.

Usually there is too much going on to keep it all under control. So you can shoot “time rings” (ripped off from Braid) which slow down the world around them, to keep the part you are not currently dealing with from evolving too fast. Using it effectively requires foresight and a good idea of what’s going on on the entire screen, in addition to the “micro dynamics” of shooting and dodging.

Each of the enemies’ sound effects layer together rhythmically to create the music for the game.

I’m pretty happy with this game. It is fun and addictive, and like Geometry Wars it has an affinity for inducing flow states and causing you to do things you didn’t know you were capable of. Be warned though — this game is not for the faint of heart: it is very difficult. If you feel like devoting a few hours to a game to learn it well enough not to die in the first 30 seconds, this is the game for you :-). That is, after all, the point. (Geometry Wars players will have an easier time, because the basic mechanic is the same, so they can start on the second layer.) My high score is 2715; consider that a challenge to beat! :-)

Best played with an XBox 360 controller. There is a keyboard and mouse mode (keys: wasd, 123, spacebar), but I have not playtested it very much in that mode, because I’m developing on a laptop with a touchpad. Gross. To use the keyboard and mouse mode, you need to enable that control scheme in the options.