Monday, December 5, 2016

I'm releasing a new configuration language named Dhall with Haskell bindings. Even if you don't use Haskell you might still find this language interesting.

This language started out as an experiment to answer common objections to programmable configuration files. Almost all of these objections are, at their root, criticisms of Turing-completeness.

For example, people commonly object that configuration files should be easy to read, but they descend into unreadable spaghetti if you make them programmable. However, Dhall doesn't have this problem because Dhall is a strongly normalizing language, which means that we can reduce every expression to a standard normal form in a finite amount of time by just evaluating everything.

a field named numberOfCharacters that stores a Natural number (i.e. a non-negative number)

a field named protagonist that stores an Optional record with a name and occupation

From this type alone, we know that no matter how complex our configuration file gets the program will always evaluate to a simple record. This type places an upper bound on the complexity of the program's normal form.

In other words, our compiler cut through all the noise and gave us an abstraction-free representation of our configuration.

Total programming

You can evaluate configuration files written in other languages, too, but Dhall differentiates itself from them by offering several stronger guarantees about evaluation:

Dhall is not Turing complete because evaluation always halts

You can never write a configuration file that accidentally hangs or loops indefinitely when evaluated

Note that you can still write a configuration file that takes longer than the age of the universe to compute, but you are much less likely to do so by accident

Dhall is safe, meaning that functions must be defined for all inputs and can never crash, panic, or throw exceptions

Dhall is sandboxed, meaning that the only permitted side effect is retrieving other Dhall expressions by their filesystem path or URL

There are examples of this in the above program where Dhall retrieves two functions from the Prelude by their URL

Dhall's type system has no escape hatches

This means that we can make hard guarantees about an expression purely from the expression's type

Dhall can normalize functions

For example, Dhall's Prelude provides a replicate function which builds a list by creating N copies of an element. Check out how we can normalize this replicate function before the function is even saturated:

By default Dhall gives a concise summary of what broke. The error message begins with a trail of breadcrumbs pointing to which file in your import graph is broken:

↳ ./function

In this case, the error is located in the ./function file that we imported.

Then the next part of the error message is a context that prints the types of all values that are in scope:

f : 1

... which says that the only value in scope is named f, and f has type 1 (Uh oh!)

The next part is a brief summary of what went wrong:

Error: Not a function

... which says that we are using something that's not a function

The compiler then prints the code fragment so we can see at a glance what is wrong with our code before we even open the file:

f False

The above fragment is wrong because f is not a function, but we tried to apply f to an argument.

Finally, the compiler prints out the file, column, and line number so that we can jump to the broken code fragment and fix the problem:

function:1:19

This says that the problem is located in the file named function at row 1 and column 19.

Detailed error messages

But wait, there's more! You might have noticed this line at the beginning of the error message:

Use "dhall --explain" for detailed errors

Let's add the --explain flag to see what happens:

$ dhall --explain <<< "./function ./switch"
↳ ./function
f : 1
Error: Not a function
Explanation: Expressions separated by whitespace denote function application,
like this:
┌─────┐
│ f x │ This denotes the function ❰f❱ applied to an argument named ❰x❱
└─────┘
A function is a term that has type ❰a → b❱ for some ❰a❱ or ❰b❱. For example,
the following expressions are all functions because they have a function type:
The function's input type is ❰Bool❱
⇩
┌───────────────────────────────┐
│ λ(x : Bool) → x : Bool → Bool │ User-defined anonymous function
└───────────────────────────────┘
⇧
The function's output type is ❰Bool❱
The function's input type is ❰Natural❱
⇩
┌───────────────────────────────┐
│ Natural/even : Natural → Bool │ Built-in function
└───────────────────────────────┘
⇧
The function's output type is ❰Bool❱
The function's input kind is ❰Type❱
⇩
┌───────────────────────────────┐
│ λ(a : Type) → a : Type → Type │ Type-level functions are still functions
└───────────────────────────────┘
⇧
The function's output kind is ❰Type❱
The function's input kind is ❰Type❱
⇩
┌────────────────────┐
│ List : Type → Type │ Built-in type-level function
└────────────────────┘
⇧
The function's output kind is ❰Type❱
Function's input has kind ❰Type❱
⇩
┌─────────────────────────────────────────────────┐
│ List/head : ∀(a : Type) → (List a → Optional a) │ A function can return
└─────────────────────────────────────────────────┘ another function
⇧
Function's output has type ❰List a → Optional a❱
The function's input type is ❰List Text❱
⇩
┌────────────────────────────────────────────┐
│ List/head Text : List Text → Optional Text │ A function applied to an
└────────────────────────────────────────────┘ argument can be a function
⇧
The function's output type is ❰Optional Text❱
An expression is not a function if the expression's type is not of the form
❰a → b❱. For example, these are not functions:
┌─────────────┐
│ 1 : Integer │ ❰1❱ is not a function because ❰Integer❱ is not the type of
└─────────────┘ a function
┌────────────────────────┐
│ Natural/even +2 : Bool │ ❰Natural/even +2❱ is not a function because
└────────────────────────┘ ❰Bool❱ is not the type of a function
┌──────────────────┐
│ List Text : Type │ ❰List Text❱ is not a function because ❰Type❱ is not
└──────────────────┘ the type of a function
You tried to use the following expression as a function:
↳ f
... but this expression's type is:
↳ 1
... which is not a function type
────────────────────────────────────────────────────────────────────────────────
f False
function:1:19

We get a brief language tutorial explaining the error message in excruciating detail. These mini-tutorials target beginners who are still learning the language and want to better understand what error messages mean.

Every type error has a detailed explanation like this and these error messages add up to ~2000 lines of text, which is ~25% of the compiler's code base.

Tutorial

The compiler also comes with an extended tutorial, which you can find here:

This tutorial is also ~2000 lines long or ~25% of the code base. That means that half the project is just the tutorial and error messages and that's not even including comments.

Design goals

Programming languages are all about design tradeoffs and the Dhall language uses the following guiding principles (in order of descending priority) that help navigate those tradeoffs:

Polish

The language should delight users. Error messages should be fantastic, execution should be snappy, documentation should be excellent, and everything should "just work".

Simplicity

When in doubt, cut it out. Every configuration language needs bindings to multiple programming languages, and the more complex the configuration language, the more difficult it is to create new bindings. Let the host language that you bind to compensate for any features missing from Dhall.

Beginner-friendliness

Dhall needs to be a language that anybody can learn in a day and debug with little to no assistance from others. Otherwise people can't recommend Dhall to their team with confidence.

Robustness

A configuration language needs to be rock solid. The last thing a person wants to debug is their configuration file. The language should never hang or crash. Ever.

Consistency

There should be only one way to do something. Users should be able to instantly discern whether or not something is possible within the Dhall language.

The Dhall configuration language is also designed to negate many of the common objections to programmable configuration files, such as:

"Config files shouldn't be Turing complete"

Dhall is not Turing-complete. Evaluation always terminates, no exceptions

"Configuration languages become unreadable due to abstraction and indirection"

Every Dhall configuration file can be reduced to a normal form which eliminates all abstraction and indirection

"Users will go crazy with syntax and user-defined constructs"

Dhall is a very minimal programming language. For example: you cannot even compare strings for equality (yes, really). The language also forbids many other common operations in order to force users to keep things simple.

Conclusion

You should read the tutorial if you would like to learn more about the language or use Dhall to configure your own projects:

If you would like to contribute, you can try porting Dhall to bind to languages other than Haskell, so that Dhall configuration files can be used across multiple languages. I keep the compiler simple (less than ~4000 lines of code if you don't count error messages) so that people can port the language more easily.

Thursday, October 27, 2016

A couple of days ago I was surprised that FiveThirtyEight gave Trump a 13.7% chance of winning, which seemed too high to be consistent with the state-by-state breakdowns. After reading their methodology I learned that this was due to them not assuming that state outcomes were independent. In other words, if one swing state breaks for Trump this might increase the likelihood that other swing states also break for Trump.

However, I still wanted to do the exercise to ask: what would be Hillary's chance of winning if each state's probability of winning was truly independent of one another? Let's write a program to find out!

Raw data

A couple of days ago (2016-10-24) I collected the state-by-state data from FiveThirtyEight's website (by hand) and recorded:

Maine apportions two of its electoral votes based on a state-wide vote (i.e. "Maine" in the above list) and then two further electoral votes are apportioned based on its two congressional districts, one vote per district (i.e. "Maine - 1" and "Maine - 2"). FiveThirtyEight computes the probabilities for each subset of electoral votes, so we just record them separately.

Combinatorial explosion

So how might we compute Hillary's chances of winning, assuming the independence of each state's outcome?

One naïve approach would be to loop through all possible electoral outcomes and compute the probability and electoral vote for each outcome. Unfortunately, that's not very efficient since the number of possible outcomes doubles with each additional entry in the list:

>>> 2 ^ length probabilities
72057594037927936

... or approximately 7.2 * 10^16 outcomes. Even if I only spent a single CPU cycle to compute each outcome (which is unrealistic) on a 2.5 GHz processor that would take almost a year to compute them all. The election is only a couple of weeks away so I don't have that kind of time or computing power!
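That back-of-the-envelope arithmetic can be checked with a small sketch (the names here are my own, for illustration):

```haskell
-- Back-of-the-envelope check: 2^56 outcomes at one outcome per clock cycle
-- on a 2.5 GHz processor

outcomes :: Integer
outcomes = 2 ^ (56 :: Int)

daysNeeded :: Double
daysNeeded = fromInteger outcomes / cyclesPerSecond / secondsPerDay
  where
    cyclesPerSecond = 2.5e9
    secondsPerDay   = 86400
-- daysNeeded works out to roughly 333 days: almost a year
```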

Distributions

Fortunately, we can do much better than that! We can efficiently solve this using a simple "divide-and-conquer" approach where we subdivide the large problem into smaller problems until the solution is trivial.

The central data structure we'll use is a probability distribution which we'll represent as a Vector of Doubles:

This Vector will always have 539 elements, one element per possible final electoral vote count that Hillary might get. Each element is a Double representing the probability of that corresponding electoral vote count. We will maintain an invariant that all the probabilities (i.e. elements of the Vector) must sum to 1.

For example, if the Distribution were:

[1, 0, 0, 0, 0... ]

... that would represent a 100% chance of Hillary getting 0 electoral votes and a 0% chance of any other outcome. Similarly, if the Distribution were:

[0, 0.5, 0, 0.5, 0, 0, 0... ]

... then that would represent a 50% chance of Hillary getting 1 electoral vote and a 50% chance of Hillary getting 3 electoral votes.

In order to simplify the problem we need to subdivide the problem into smaller problems. For example, if I want to compute the final electoral vote probability distribution for all 50 states perhaps we can break that down into two smaller problems:

Split the 50 states into two sub-groups of 25 states each

Compute an electoral vote probability distribution for each sub-group of 25 states

Combine probability distributions for each sub-group into the final distribution

In order to do that, I need to define a function that combines two smaller distributions into a larger distribution:

The combine function takes two input distributions named xs and ys and generates a new distribution named zs. To compute the probability of getting i electoral votes in our composite distribution, we just add up all the different ways we can get i electoral votes from the two sub-distributions.

For example, to compute the probability of getting 4 electoral votes for the entire group, we add up the probabilities for the following 5 outcomes:

We get 0 votes from our 1st group and 4 votes from our 2nd group

We get 1 vote from our 1st group and 3 votes from our 2nd group

We get 2 votes from our 1st group and 2 votes from our 2nd group

We get 3 votes from our 1st group and 1 vote from our 2nd group

We get 4 votes from our 1st group and 0 votes from our 2nd group

The probabilityOfEachOutcome function computes the probability of each one of these outcomes and then the totalProbability function sums them all up to compute the total probability of getting i electoral votes.

We can also define an empty distribution representing the probability distribution of electoral votes given zero states:
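The code for these definitions is not shown in this excerpt, but a sketch consistent with the description above (reusing the probabilityOfEachOutcome and totalProbability names; the exact original definitions may differ) might look like:

```haskell
import qualified Data.Vector.Unboxed as Vector

-- One element per possible electoral vote count for Hillary (0 through 538)
type Distribution = Vector.Vector Double

-- Combine two sub-distributions: the probability of `i` total votes is the
-- sum, over every split of `i` into `j` votes from the first group and
-- `i - j` votes from the second group, of the probability of that split
combine :: Distribution -> Distribution -> Distribution
combine xs ys = Vector.generate 539 totalProbability
  where
    totalProbability i = sum (map probabilityOfEachOutcome [0 .. i])
      where
        probabilityOfEachOutcome j =
            Vector.unsafeIndex xs j * Vector.unsafeIndex ys (i - j)

-- Zero states: a 100% chance of 0 electoral votes
empty :: Distribution
empty = Vector.generate 539 (\i -> if i == 0 then 1 else 0)
```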

In other words, Clinton has a 99.3% chance of winning if each state's outcome is independent of every other outcome. This is significantly higher than the probability estimated by FiveThirtyEight at that time: 86.3%.

These results differ for the same reason I noted above: FiveThirtyEight assumes that state outcomes are not necessarily independent and that a Trump win in one state could correlate with Trump wins in other states. This possibility of correlated victories favors the person who is behind in the race.

As a sanity check, we can also verify that the final probability distribution has probabilities that add up to approximately 1:

This is a significant improvement over a year's worth of running time.

We could even speed this up further using parallelism. Thanks to our divide and conquer approach we can subdivide this problem among up to 53 CPUs to accelerate the solution. However, after a certain point the overhead of splitting up the work might outweigh the benefits of parallelism.

Monoids

People more familiar with Haskell will recognize that this solution fits cleanly into a standard Haskell interface known as the Monoid type class. In fact, many divide-and-conquer solutions tend to be Monoids of some sort.
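For reference, here is a simplified sketch of the Monoid class (the real class in Data.Monoid also includes an mconcat method):

```haskell
-- Hide the real Monoid so we can sketch a simplified version of the class
import Prelude hiding (Monoid, mempty, mappend)

class Monoid m where
    mappend :: m -> m -> m
    mempty  :: m
```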

... and the Monoid class has three rules that every implementation must obey, which are known as the "Monoid laws".

The first rule is that mappend (or the equivalent (<>) operator) must be associative:

x <> (y <> z) = (x <> y) <> z

The second and third rules are that mempty must be the "identity" of mappend, meaning that mempty does nothing when combined with other values:

mempty <> x = x
x <> mempty = x

A simple example of a Monoid is integers under addition, which we can implement like this:

instance Monoid Integer where
    mappend = (+)
    mempty  = 0

... and this implementation satisfies the Monoid laws thanks to the laws of addition:

(x + y) + z = x + (y + z)
0 + x = x
x + 0 = x

However, Distributions are Monoids, too! Our combine and empty definitions both have the correct types to implement the mappend and mempty methods of the Monoid typeclass, respectively:

instance Monoid Distribution where
    mappend = combine
    mempty  = empty

Both mappend and mempty for Distributions satisfy the Monoid laws:

mappend is associative (Proof omitted)

mempty is the identity of mappend

We can prove the identity law using the following rules for how Vectors behave:

-- These rules assume that all vectors involved have 539 elements

-- If you generate a vector by just indexing into another vector, you just get
-- back the other vector
Data.Vector.Unboxed.generate 539 (Data.Vector.Unboxed.unsafeIndex xs) = xs

-- If you index into a vector generated by a function, that's equivalent to
-- calling that function
Data.Vector.Unboxed.unsafeIndex (Data.Vector.Unboxed.generate 539 f) i = f i

Equipped with those rules, we can then prove that mappend xs mempty = xs:
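Using those two rules plus the combine and empty definitions described above, the proof goes through by equational reasoning (a sketch):

```
mappend xs mempty

-- mappend = combine, mempty = empty
= combine xs empty

-- Definition of combine
= generate 539 (\i -> sum [ unsafeIndex xs j * unsafeIndex empty (i - j) | j <- [0..i] ])

-- unsafeIndex empty k is 1 when k = 0 and 0 otherwise,
-- so only the j = i term survives
= generate 539 (\i -> unsafeIndex xs i * 1)

= generate 539 (unsafeIndex xs)

-- First vector rule above
= xs
```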

... and then in a separate terminal you can hit each endpoint with curl as illustrated above.

GHC Generics

The Haskell magic is in this one line of code:

deriving (Generic, ParseRecord)

This auto-generates code that tells the server how to marshal the route and query parameters into our Command data type. All we have to do is supply a handler that pattern matches on the incoming Command to decide what to do:

The server uses backtracking when parsing the route so the server knows when the Ints end and the Text begins.

Types

The whole thing is strongly typed, which means several things in the context of service programming.

For example, if you define a data type that expects an Int field, then by golly your handler will get an Int field or the server will automatically reject the request for you. You don't have to worry about checking that the field is present nor do you need to validate that the parameter decodes to an Int correctly. If you want the parameter to be optional then you need to make that explicit by marking the field as type Maybe Int.

You also don't have to handle fields that belong to other endpoints. Each endpoint only gets exactly the fields it requested; no more, no less. If a given endpoint gets the wrong set of path tokens or query parameters then the server rejects the request for you.

This is also strongly typed in the sense that more logic is pushed into the type and less logic goes in the handler. If you want just the first or last occurrence of a query parameter, you just annotate the type with First or Last, respectively. The more logic you push into declarative type-level programming the more you distill your handler to focus on business logic.

Caveats

I wrote this library to provide a quick and easy way to spin up Haskell web services, but the library could still use some improvement. I'm not really a web developer so I only kind of know what I'm doing and could use help from people more knowledgeable than me.

The most notable deficiency is that the library does not take care to serve proper HTTP status codes for different types of errors. Every failed request returns a 404 status code.

Also, if the route is malformed the error message is a completely unhelpful "404 Not Found" error message that doesn't indicate how to fix the error.

Another blatant deficiency is that the server completely ignores the request method. I wasn't sure how to design this to work within the framework of data type generic programming.

If you have ideas about how to improve things I would greatly welcome any contributions.

Conclusions

People familiar with Haskell will recognize that this library resembles the servant library in some respects. The high-level difference is that this is a subset of servant that is much simpler but also significantly less featureful. For example, servant can also generate client-side bindings and Swagger resource declarations and servant also permits a much greater degree of customization.

This library focuses primarily on simple quick-and-dirty services; for anything more polished and production-ready you will probably want to try other Haskell service libraries. I just wanted to make it as easy as possible for people to get started with back-end development in Haskell and also show off how cool and powerful GHC generics can be.

If you would like to learn more about this library you can read the tutorial or if you would like to use the library you can obtain the code from Hackage or Github.

Sunday, July 3, 2016

Currently, Hackage has four implementations of "ListT-done-right" that I'm aware of:

LogicT

pipes (which provides a ListT type)

list-t

List

However, I felt that all of these libraries were more complex than they needed to be, so I tried to distill them down to the simplest library possible. I want to encourage more people to use ListT, so I'm releasing the beginner-friendly list-transformer library.

There are a few design choices I made to improve the new user experience for this library:

First, the ListT data type is not abstract and users are encouraged to use constructors to both assemble and pattern match on ListT values. This helps them build an intuition for how ListT works under the hood since the type is small and not difficult to use directly:
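The definition itself is not shown in this excerpt, but a sketch of the representation (assumed from the library's description; the actual field names may differ) looks like:

```haskell
-- A ListT is a list whose tail is wrapped in an effect `m`,
-- ending in either a Cons cell or Nil
newtype ListT m a = ListT { next :: m (Step m a) }

data Step m a = Cons a (ListT m a) | Nil

-- Assembling a value directly from the constructors:
example :: ListT IO Int
example = ListT (return (Cons 1 (ListT (return Nil))))
```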

Second, the API is tiny in order to steer users towards leaning on Haskell's "collections API" as much as possible. Specifically, I try to direct people towards these type classes:

Functor/Applicative/Monad

Alternative/MonadPlus

MonadTrans/MonadIO

Right now there are only three functions in the API that are not class methods:

runListT

fold

foldM

Everything else is a method for one of the standard type classes and I avoid introducing new type classes.

Third, the API does not provide convenient helper functions for fully materializing the list. In other words, there is no utility function of this type:

toList :: ListT m a -> m [a]

A determined user can still force the list either indirectly via the fold function or by manually recursing over the ListT type. The goal is not to forbid this behavior, but rather to gently encourage people to preserve streaming. The API promotes the intuition that you're supposed to transform and consume results one-by-one instead of in large batches.

Fourth, the library comes with a long inline tutorial which is longer than the actual code. I think the tutorial could still use some improvement so if you would like to contribute to improve the tutorial please do!

Conclusion

You can find the list-transformer library either on Hackage or on Github.

Hopefully this encourages more people to give ListT a try and provides a stepping stone for understanding more complex streaming abstractions.

This utility just provides a command-line API for Haskell's criterion benchmarking library. The bench tool wraps any shell command you provide in a subprocess and benchmarks that subprocess through repeated runs using criterion. The number of runs varies from 10 to 10,000 depending on how expensive the command is.

This tool also threads through the same command-line options that criterion accepts for benchmark suites. You can see the full set of options using the --help flag:

This also produces something like the following chart, which you can view in example.html:

You can also increase the time limit using the --time-limit option, which will in turn increase the number of runs for better statistics. For example, criterion warned me that I had too many outliers for my benchmarks, so I can increase the time limit for the above benchmark to 30 seconds:

Sunday, April 24, 2016

The title of this post is a play on the Lisp aphorism: "Code is Data". In the Lisp world everything is data; code is just another data structure that you can manipulate and transform.

However, you can also go to the exact opposite extreme: "Data is Code"! You can make everything into code and implement data structures in terms of code.

You might wonder what that even means: how can you write any code if you don't have any primitive data structures to operate on? Fascinatingly, Alonzo Church discovered a long time ago that if you have the ability to define functions you have a complete programming language. "Church encoding" is the technique named after his insight that you could transform data structures into functions.

This post is partly a Church encoding tutorial and partly an announcement for my newly released annah compiler which implements the Church encoding of data types. Many of the examples in this post are valid annah code that you can play with. Also, to be totally pedantic annah implements Boehm-Berarducci encoding which you can think of as the typed version of Church encoding.

This post assumes that you have basic familiarity with lambda expressions. If you do not, you can read the first chapter (freely available) of Haskell Programming from First Principles, which does an excellent job of teaching lambda calculus.

If you would like to follow along with these examples, you can download and install annah by following these steps:

Lambda calculus

In the untyped lambda calculus, you only have lambda expressions at your disposal and nothing else. For example, here is how you encode the identity function:

λx → x

That's a function that takes one argument and returns the same argument as its result.

We call this "abstraction" when we introduce a variable using the Greek lambda symbol and we call the variable that we introduce a "bound variable". We can then use that "bound variable" anywhere within the "body" of the lambda expression.

Any expression that begins with a lambda is an anonymous function which we can apply to another expression. For example, we can apply the identity function to itself like this:

(λx → x) (λy → y)

-- β-reduction
= λy → y

We call this "application" when we supply an argument to an anonymous function.

We can define a function of multiple arguments by nesting "abstractions":

λx → λy → x

The above code is an anonymous function that returns an anonymous function. For example, if you apply the outermost anonymous function to a value, you get a new function:

(λx → λy → x) 1

-- β-reduce
λy → 1

... and if you apply the lambda expression to two values, you return the first value:

(λx → λy → x) 1 2

-- β-reduce
(λy → 1) 2

-- β-reduce
1

So our lambda expression behaves like a function of two arguments, even though it's really a function of one argument that returns a new function of one argument. We call this "currying" when we simulate functions of multiple arguments using functions of one argument. We will use this trick because we will be programming in a lambda calculus that only supports functions of one argument.

Typed lambda calculus

In the typed lambda calculus you have to specify the types of all function arguments, so you have to write something like this:

λ(x : a) → x

... where a is the type of the bound variable named x.

However, the above function is still not valid because we haven't specified what the type a is. In theory, we could specify a type like Int:

λ(x : Int) → x

... but the premise of this post was that we could program without relying on any built-in data types so Int is out of the question for this experiment.

Fortunately, some typed variations of lambda calculus (most notably: "System F") let you introduce the type named a as yet another function argument:

λ(a : *) → λ(x : a) → x

This is called "type abstraction". Here the * is the "type of types" and is a universal constant that is always in scope, so we can always introduce new types as function arguments this way.

The above function is the "polymorphic identity function", meaning that this is the typed version of the identity function that still preserves the ability to operate on any type.

If we had built-in types like Int we could apply our polymorphic function to the type just like any other argument, giving back an identity function for a specific type:

(λ(a : *) → λ(x : a) → x) Int

-- β-reduction
λ(x : Int) → x

This is called "type application" or (more commonly) "specialization". A "polymorphic" function is a function that takes a type as a function argument and we "specialize" a polymorphic function by applying the function to a specific type argument.

However, we are forgoing built-in types like Int, so what other types do we have at our disposal?

Well, every lambda expression has a corresponding type. For example, the type of our polymorphic identity function is:

∀(a : *) → ∀(x : a) → a

You can read the type as saying:

this is a function of two arguments, one argument per "forall" (∀) symbol

the first argument is named a and a is a type

the second argument is named x and the type of x is a

the result of the function must be a value of type a

This type uniquely determines the function's implementation. To be totally pedantic, there is exactly one implementation up to extensional equality of functions. Since this function has to work for any possible type a there is only one way to implement the function. We must return x as the result, since x is the only value available of type a.

Passing around types as values and function arguments might seem a bit strange to most programmers, since most languages:

use a different syntax for type abstraction/application versus ordinary abstraction and application

Example: Scala

// The polymorphic identity function in Scala
def id[A](x: A): A = x

// Example use of the function
// Note: Scala lets you omit the `[Boolean]` here thanks
// to type inference, but I'm making the type
// application explicit just to illustrate that
// the syntax is different from normal function
// application
id[Boolean](true)

For the purpose of this post we will program with explicit type abstraction and type application so that there is no magic or hidden machinery.

So, for example, suppose that we wanted to apply the typed, polymorphic identity function to itself. The untyped version was this:

So we can still apply the identity function to itself, but it's much more verbose. Languages with type inference automate this sort of tedious work for you while still giving you the safety guarantees of types. For example, in Haskell you would just write:

(\x -> x) (\y -> y)

... and the compiler would figure out all the type abstractions and type applications for you.

Exercise: Haskell provides a const function defined like this:

const :: a -> b -> a
const x y = x

Translate the const function to a typed and polymorphic lambda expression in System F (i.e. using explicit type abstractions)

Boolean values

Lambda expressions are the "code", so now we need to create "data" from "code".

One of the simplest pieces of data is a boolean value, which we can encode using typed lambda expressions. For example, here is how you implement the value True:

λ(Bool : *) → λ(True : Bool) → λ(False : Bool) → True

Note that the names have no significance at all. I could have equally well written the expression as:

λ(a :*) → λ(x : a) → λ(y : a) → x

... which is "α-equivalent" to the previous version (i.e. equivalent up to renaming of variables).

We will save the above expression to a file named ./True in our current directory. We'll see why shortly.

We are saving these terms and types to files because we can use the annah compiler to work with any lambda expression or type saved as a file. For example, I can use the annah compiler to verify that the file ./True has type ./Bool:

If the expression type-checks then annah will just compile the expression to lambda calculus (by removing the unnecessary type annotation in this case) and return a zero exit code. However, if the expression does not type-check:

annah does not evaluate the expression. annah only translates the expression into Morte code (and the expression is already valid Morte code) and type-checks the expression. If you want to evaluate the expression you need to run the expression through the morte compiler, too:

The above sequence of steps is a white lie: the true order of steps is actually different, but equivalent.

The ./if function was not even necessary because every value of type ./Bool is already a "pre-formed if expression". That's why ./if is just the identity function on ./Bools. You can delete the ./if from the above example and the code will still work.

We started with nothing but lambda expressions, but still managed to implement:

a ./Bool type

a ./True value of type ./Bool

a ./False value of type ./Bool

./if, ./not, ./and, and ./or functions

... and we can do real computation with them! In other words, we've modeled boolean data types entirely as code.
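To make the "booleans as pre-formed if expressions" idea concrete, here is a sketch of the same functions in untyped Python lambdas (illustrative only; the names IF, NOT, AND, and OR are my own stand-ins for the ./if, ./not, ./and, and ./or files):

```python
# Church-encoded booleans select one of two branches.
TRUE  = lambda t: lambda f: t
FALSE = lambda t: lambda f: f

# "if" is the identity: a boolean already *is* an if expression.
IF  = lambda b: b
NOT = lambda b: lambda t: lambda f: b(f)(t)  # swap the two branches
AND = lambda x: lambda y: x(y)(FALSE)        # if x then y else False
OR  = lambda x: lambda y: x(TRUE)(y)         # if x then True else y
```

Note how IF contributes nothing: deleting it changes no result, exactly as described above for ./if.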

Exercise: Implement an xor function

Natural numbers

You might wonder what other data types you can implement in terms of lambda calculus. Fortunately, you don't have to wonder because the annah compiler will actually compile data type definitions to lambda expressions for you.

For example, suppose we want to define a natural number type encoded using Peano numerals. We can write:

All Boehm-Berarducci-encoded datatypes are encoded as substitution functions, including ./Nat. Any value of ./Nat is a function that takes three arguments that we will substitute into our natural number expression:

The first argument replaces every occurrence of the Nat type

The second argument replaces every occurrence of the Succ constructor

The third argument replaces every occurrence of the Zero constructor

This will make more sense if we walk through a specific example. First, we will build the number 3 using the ./Nat/Succ and ./Nat/Zero constructors:
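The same construction can be sketched with untyped Python lambdas (again purely illustrative; ZERO, SUCC, and THREE are my own names mirroring ./Nat/Zero and ./Nat/Succ):

```python
# Peano naturals as substitution functions (Church numerals, types erased):
# a numeral takes a replacement for Succ and one for Zero.
ZERO = lambda s: lambda z: z
SUCC = lambda n: lambda s: lambda z: s(n(s)(z))

# The number 3: ./Nat/Succ (./Nat/Succ (./Nat/Succ ./Nat/Zero))
THREE = SUCC(SUCC(SUCC(ZERO)))
```

Substituting machine successor and machine zero recovers an ordinary integer, which is a quick way to check that the encoding behaves as claimed.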

Now suppose that we want to compute whether or not our natural number is even. The only catch is that we must limit ourselves to substitution when computing evenness. We have to figure out something that we can substitute in place of the Succ constructors and something that we can substitute in place of the Zero constructors that will then evaluate to ./True if the natural number is even and ./False otherwise.

One substitution that works is the following:

Replace every Zero with ./True (because Zero is even)

Replace every Succ with ./not (because Succ alternates between even and odd)

So in other words, if we began with this:

./Nat/Succ (./Nat/Succ (./Nat/Succ ./Nat/Zero ))

... and we replace every ./Nat/Succ with ./not and replace ./Nat/Zero with ./True:

Now the next two arguments have exactly the right type for us to substitute in ./not and ./True. The argument named ./Succ is now a function of type ∀(pred : ./Bool ) → ./Bool, which is the same type as ./not. The argument named ./Zero is now a value of type ./Bool, which is the same type as ./True. This means that we can proceed with the next two arguments:
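The whole parity computation can be sketched end-to-end in untyped Python lambdas (illustrative; all names here are mine, not annah's):

```python
# Church booleans and "not"
TRUE  = lambda t: lambda f: t
FALSE = lambda t: lambda f: f
NOT   = lambda b: lambda t: lambda f: b(f)(t)

# Church naturals
ZERO  = lambda s: lambda z: z
SUCC  = lambda n: lambda s: lambda z: s(n(s)(z))
THREE = SUCC(SUCC(SUCC(ZERO)))

# Substitute NOT for every Succ and TRUE for Zero:
# the numeral then computes its own parity.
is_even = lambda n: n(NOT)(TRUE)
```

Three applications of NOT to TRUE leave FALSE, so the sketch agrees with the substitution argument above: 3 is odd.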

So we can encode natural numbers in lambda calculus, albeit very inefficiently! There are some tricks that we can use to greatly improve both the asymptotic complexity and the constant factors, but it will never be competitive with machine arithmetic. This is more of a proof of concept that you can model arithmetic purely in code.
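To give a feel for how far pure substitution goes, addition can be sketched by substituting one numeral into the Zero slot of the other (illustrative Python, my own names):

```python
ZERO = lambda s: lambda z: z
SUCC = lambda n: lambda s: lambda z: s(n(s)(z))

# Addition by substitution: replace the Zero inside m with the numeral n,
# so m's Succs stack on top of n's.
ADD = lambda m: lambda n: lambda s: lambda z: m(s)(n(s)(z))

# Helper (not part of the encoding) to read a numeral back as an int.
to_int = lambda n: n(lambda x: x + 1)(0)
```

The multiplication exercise below can be solved in the same substitution style.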

Exercise: Implement a function which multiplies two natural numbers

Data types

annah also lets you define "temporary" data types that scope over a given expression. In fact, that's how Nat was implemented. You can look at the corresponding *.annah files to see how each type and term is defined in annah before conversion to morte code.

The first four lines are identical to what we wrote when we invoked the annah types command from the command line. We can use the exact same data type specification to create a scoped expression that can reference the type and data constructors we specified.

You can use these scoped datatype declarations to quickly check how various datatypes are encoded without polluting your current working directory. For example, I can ask annah how the type Maybe is encoded in lambda calculus:

A Maybe value is just another substitution function. You provide one branch that you substitute for Just and another branch that you substitute for Nothing. For example, the Just constructor always substitutes in the first branch and ignores the Nothing branch that you supply:
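Here is a sketch of that substitution function in untyped Python lambdas (illustrative; JUST and NOTHING are my own names for the two constructors):

```python
# Maybe as a substitution function: you provide a branch for Just
# and a branch for Nothing.
JUST    = lambda x: lambda just: lambda nothing: just(x)  # always picks the just branch
NOTHING = lambda just: lambda nothing: nothing            # always picks the nothing branch
```

This is the same shape as the boolean encoding, except that the Just branch also receives the wrapped value.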

... but that doesn't really tell us anything about how annah desugars let because we only see the final evaluated result. We can ask annah to desugar without performing any other transformations using the annah desugar command:

Also, every expression has a corresponding *.annah file that documents the expression's type using a let expression. For example, we can see the type of the ./List/(++) function by studying the ./List/(++).annah file:
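For intuition about what a function like ./List/(++) looks like under the encoding, here is a sketch in untyped Python lambdas (illustrative only; NIL, CONS, and APPEND are my own names, and this is the standard Boehm-Berarducci-style append, not annah's literal output):

```python
# Boehm-Berarducci lists: a list is its own right fold.
NIL  = lambda c: lambda n: n
CONS = lambda x: lambda xs: lambda c: lambda n: c(x)(xs(c)(n))

# Append: fold the first list, using the second list as the base case.
APPEND = lambda xs: lambda ys: lambda c: lambda n: xs(c)(ys(c)(n))

# Helper (not part of the encoding) to read a list back out.
to_list = lambda xs: xs(lambda x: lambda rest: [x] + rest)([])
```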

Conclusions

A lot of people underestimate how much you can do in a total lambda calculus. I don't recommend pure lambda calculus as a general programming language, but I could see a lambda calculus enriched with high-efficiency primitives being a realistic starting point for simple functional languages that are easy to port and distribute.

One of the projects I'm working towards in the long run is a "JSON for code", and annah is one step along the way towards that goal. annah will likely not be that language, but I still factored it out as a separate and reusable project so that others can fork and build on it when experimenting with their own language designs.

Also, as far as I can tell annah is the only project in the wild that actually implements the Boehm-Berarducci encoding outlined in this paper:

The Annah tutorial goes into the Annah language and compiler in much more detail than this tutorial, so if you would like to learn more I highly recommend reading the tutorial which walks through the compiler, desugaring, and the Prelude in much more detail: