Monday, January 21, 2013

I fell in love with Haskell neither because of its types nor because of functional programming. Rather, I admired Haskell's beautiful approach to I/O, and I hope that after reading this you will, too. I'm writing this IO tutorial to underscore Haskell's simplicity and consistency.

If you want to follow along, download the Haskell Platform, which provides a Haskell compiler (ghc) and Haskell interpreter (ghci).

Hello, world!

Let's begin with the simplest possible Haskell program:

main = putStrLn "Hello, world"

If you save the above program to example.hs, you can compile and run it using:

$ ghc example.hs -o example
$ ./example
Hello, world

putStrLn is a function that takes a single argument of type String and produces a value of type IO (). The IO () signifies an executable action that returns a value of type (). In this case, the action just prints the given String, and putStrLn is just a mnemonic for "putString with newLine".

All IO actions must return a value, so Haskell programmers return () when they have nothing useful to return. This exactly mirrors how imperative programmers mark a function void when they don't return anything useful. For example, if we were to translate putStrLn's type to other languages, it would look like:

void putStrLn(string str);               // C++
public static void putStrLn(String str)  // Java

This tells us what we already knew: when we supply putStrLn with a String argument we get an executable IO action which will print the supplied String. All of this so far is very straight-forward and we haven't deviated yet from traditional programming languages.

Equality

Now let's compare our original program side-by-side with equivalent main functions in imperative languages:

-- Haskell
main = putStrLn "Hello, world!"

// C
int main() {
    printf("Hello, world!\n");
    return 0;
}

One difference should stand out: the Haskell version does not use traditional block syntax for defining the main function. Instead it uses the equals sign, which may seem quite curious to imperative eyes.

It turns out that Haskell takes equality very seriously. When you write:

x = 5

... you simply declare x to be synonymous with 5. You cannot change the value of x any more than you can change the value of the number 5. x and 5 become completely interchangeable since they both refer to the same thing. They are equal.
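As a tiny illustration (names chosen here for the example), anywhere we use x we could write 5 instead, and the program would not change:

```haskell
x :: Int
x = 5

y :: Int
y = x + x  -- interchangeable with: 5 + 5

main :: IO ()
main = print y
```

Running this prints 10, exactly as if we had written print (5 + 5) directly.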

So when we write:

main = putStrLn "Hello, world!"

... all that says is that main is nothing more than a synonym for putStrLn "Hello, world!". Therefore, they must have the same type, and we can prove this by loading our program into ghci:

Prelude> :t main
main :: IO ()

What happens if we instead equate main with something that is not executable, such as a Char?

main = 'A'

The compiler rejects it:

example.hs:1:8:
    Couldn't match expected type `IO t0' with actual type `Char'
    In the expression: 'A'
    In an equation for `main': main = 'A'

The compiler tells us that a Char is not executable, which makes sense! Notice that we rejected the above main not on the basis of syntax or grammar, but solely on the basis of its type. The type alone communicates whether or not something is executable. If something has type IO a, then it is executable. If something does not have type IO a, then it is not executable.

Wait... How does main do more than one thing, then? putStrLn "Hello, world" has type IO (), so does that mean that it uses up all of main's "executable juice"? Not at all!

do notation

Haskell lets you combine multiple IO actions into a single IO action using do notation. I'll introduce the following two functions to demonstrate this:

putStrLn :: String -> IO () -- Print a String to standard output
getLine  :: IO String       -- Read a String from standard input

We can feed the String return value of getLine into putStrLn using do notation:

main = do str <- getLine
          putStrLn str
Haskell also supports semicolons and braces for those who don't like significant whitespace:

main = do {
    str <- getLine;
    putStrLn str;
    }
do notation combines these two IO actions into a single IO action. This combined action prompts the user for one line of input and then echoes it back. Let's try it!

$ ./example
Test<Enter>
Test

We can also sequence two IO actions like this:

main = do putStrLn "Hello,"
          putStrLn "world!"

$ ./example
Hello,
world!

We can combine as many actions as we please:

main = do putStrLn "Enter a string:"
          str <- getLine
          putStrLn str

$ ./example
Enter a string:
Apple<Enter>
Apple

Our do block only lets us combine IO commands, though. If we attempt to insert a non-IO statement, we get a type error:

main = do putStrLn "Enter a string:"
          str <- getLine
          "Hello"

example.hs:3:11:
    Couldn't match expected type `IO b0' with actual type `[Char]'
    In a stmt of a 'do' block: "Hello"
    In the expression:
      do { putStrLn "Enter a string:";
           str <- getLine;
           "Hello" }
The first line says it all. The do block expected another executable command on the last line, but we gave it a String, which isn't executable. That's a pretty reasonable type error if you ask me.

Types - Part 3

Haskell has one and only one rule for deciding if a do block is valid:

-- Given:
m :: IO a
f :: a -> IO b

-- Then the following 'do' block is valid
block :: IO b -- This must match the return value of 'f'
block = do a <- m
           f a
do blocks are first-class expressions in Haskell. Like all expressions, we can ask ghci for their type:

Prelude> :t do { str <- getLine; putStrLn str }
do { str <- getLine; putStrLn str } :: IO ()
putStrLn str was the last action and its type is IO (), so according to the rule the whole block has the same type: IO ().

Now, why on earth should a do block have a return value at all? That would only make sense if we could use a do block within another do block. Does that work?

main = do str <- do putStrLn "Enter a string:"
                    getLine
          putStrLn str

$ ./example
Enter a string:
Apple<Enter>
Apple

Huh.

do blocks use the same return value as their last command because they actually return the last command's value if you bind them within a larger do block. In the above example, the inner do block forwards the String returned by getLine to the outer do block, and that String is bound to the str variable.

This approach maintains consistency in the face of refactorings. Let's define a synonym for the inner do block so we can refer to it by name:

prompt :: IO String
prompt = do putStrLn "Enter a string:"
            getLine

main = do str <- prompt
          putStrLn str
prompt's type signature is entirely for our own benefit. Remember that the compiler can deduce the type mechanically. Haskell programmers still like to include these type signatures to organize their thoughts or to clearly communicate their code's intention to other programmers. prompt's type signature tells other programmers "This is an executable action that retrieves a String".

Impurity

Notice how we don't use the equals sign when we store the result of IO actions. Instead we use the <- symbol:

main = do str <- prompt
          putStrLn str
That's because str is not truly equal to prompt. In fact, they don't even have the same type: str has type String and prompt has type IO String. When we mark something as IO a, we communicate that its result may change each time we bind it within a do block.

That is obvious in the case of prompt since the user may enter a different string each time:

main = do str1 <- prompt
          putStrLn str1
          str2 <- prompt
          putStrLn str2

Enter a string:
String 1
String 1
Enter a string:
String 2
String 2

However, notice that we still use the equal sign when defining prompt:

prompt :: IO String
prompt = do putStrLn "Enter a string:"
            getLine

prompt is truly equal to that executable action, which seems strange. How does that reconcile with what I just said?

It turns out that Haskell distinguishes between an executable program and the program's return value, a distinction that almost no other language makes. A value of type IO a is an executable program that retrieves an a, possibly with side effects, but the program itself is not the same as its return value. So if getLine has type IO String, that means that getLine is an executable program that knows how to retrieve a String. It does not mean that getLine is a String.
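One way to see this distinction (a small sketch, with the name action chosen here for illustration): an IO action is an ordinary value that we can name and reuse. Naming the action does not run it; main runs, and it runs the action as many times as it sequences it:

```haskell
-- 'action' is a program description; defining it prints nothing.
action :: IO ()
action = putStrLn "Hello"

-- 'main' sequences the same description twice, so it prints twice.
main :: IO ()
main = do action
          action
```

Running this prints "Hello" twice, even though we only wrote putStrLn once.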

When I say prompt equals the given do block, I mean that they are equal in the sense that they are the same program. Since they are equal I can freely substitute one for the other. For example, I can go back and substitute the do block in place of prompt and trust that this substitution does not change anything:

main = do str1 <- do putStrLn "Enter a string:"
                     getLine
          putStrLn str1
          str2 <- do putStrLn "Enter a string:"
                     getLine
          putStrLn str2
This means that do notation does not actually do any computation. All it does is combine program descriptions into larger program descriptions. The compiler then translates the main program description into an executable.

This might seem like a trivial distinction at first until you realize that this allows you to separate your program into two parts:

The part you reason about at compile-time (i.e. non-IO code)

The part you reason about at run-time (i.e. IO code)

The more you factor your program into non-IO code, the more of your program you can prove correct at compile time, which means that no run-time circumstances will ever break it.

With traditional programming languages, all code only exists at run-time (i.e. it is all IO code by default), so you can never be entirely sure that it is truly correct. No matter how many tests you write, all you can prove to yourself is that a certain run-time snapshot of your program happens to be in a correct state.
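To sketch this split (a made-up example, not from the original text): everything about the pure function below is fixed by its definition and checkable at compile time, while only the final print belongs to the run-time IO part:

```haskell
-- Pure code: 'double x == x + x' holds for every run, by definition.
double :: Int -> Int
double x = x + x

-- IO code: only executing the program reveals its output.
main :: IO ()
main = print (double 21)
```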

Associativity

Haskell guarantees the following property of all IO programs:

-- Given:
m :: IO a
f :: a -> IO b
g :: b -> IO c

-- Then the following programs are all equal:
do b <- do a <- m
           f a
   g b

= do a <- m
     b <- f a
     g b

= do a <- m
     do b <- f a
        g b
In other words, the program's behavior does not change no matter how you group IO actions. They will still always execute in the same order and preserve the same flow of information.

This guarantees that we can always safely refactor arbitrary code blocks without worrying that we will get weird behavior as a result of creating or removing blocks. Using the above example, I could choose to factor out the first two actions:

block12 = do a <- m
             f a
... or I can factor out the last two actions:

block23 a = do b <- f a
               g b
I can similarly trust that "unfactoring" things is safe and won't cause weird behavior. In other words, I can inline the two refactored blocks safely and trust that it produces the same program:

block123 = do a <- m
              b <- f a
              g b
Notice that these programs are all truly equal, which is why we use the equal sign when we say they are the same. This is an example of how we reason about correctness at compile time. We can't reason about the specific values of a, b, and c because they do not exist until run-time, but the program descriptions themselves exist at compile time, so we can statically reason about their correctness and prove that certain transformations are correct.

Identity

What if we want to create a "trivial" program that just returns a value without any side effects? That's what the return function does:

return :: a -> IO a

return takes a value and then defines a trivial program that has no side effects and just returns the given value.

Note that it does not behave like the traditional return command in imperative languages. It does not escape from the surrounding block because Haskell do notation has no concept of a "surrounding block". If it did, all the transformations I gave in the last section would not be correct. More generally, return does not affect control flow in any way. All it does is create an "empty" program that yields a value. That's it.
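A quick sketch illustrates this: a return in the middle of a do block does not short-circuit the rest of the block:

```haskell
main :: IO ()
main = do putStrLn "before"
          return ()            -- an empty program; does NOT exit main
          putStrLn "after"     -- still runs
```

This prints both lines, confirming that return has no effect on control flow.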

We call it return because we commonly use it as the last statement in a do block to combine the results of several previous IO actions:

twoStrings :: IO (String, String)
twoStrings = do str1 <- getLine
                str2 <- getLine
                return (str1, str2)
Other than that, it bears no resemblance to traditional returns.

What equalities might we expect return to satisfy? Well, we expect that:

do x <- m
   return x

= do m
In other words, there is no need to "re-return" the last action's value, because do notation already does that.

We also expect that:

do y <- return x
   f y

= do f x
This just formalizes what I already said: return is an empty program that just binds the same value that you gave it.
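For instance (a made-up example, not from the original text), binding the result of return and then printing it behaves exactly like printing the value directly:

```haskell
main :: IO ()
main = do y <- return (21 * 2)  -- 'return' just binds 42 to y
          print y               -- identical to: print (21 * 2)
```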

Ordering

Now let's define two programs that retrieve input from the outside world:

getLine :: IO String -- Provided by the Prelude

getInt :: IO Int
getInt = do str <- getLine
            return (read str)
... and then a function designed to use both of those values:

f :: String -> Int -> IO ()

A new Haskell programmer might be tempted to write:

main = f getLine getInt

... but the compiler rejects this:

example.hs:12:10:
    Couldn't match expected type `String' with actual type `IO String'
    In the first argument of `f', namely `getLine'
    In the expression: f getLine getInt
    In an equation for `main': main = f getLine getInt

Imperative programmers make this mistake because imperative languages let them write:

f(getLine(), getInt())

... and the imperative compiler doesn't mind that you passed an impure function as an argument, because imperative languages don't distinguish between a computation's side effects and its result.

The imperative approach may seem acceptable until you ask yourself the question: "Which function will prompt the user first: getLine() or getInt()?" Typically an imperative language will evaluate the arguments in order, first evaluating getLine followed by getInt. However, this evaluation order is entirely arbitrary and must be pinned down by the language definition.

In fact, most imperative programmers would probably consider the above imperative example "bad form". We'd rather make the ordering more explicit by writing:

str = getLine(); // First prompt the user for a string
n = getInt(); // Now prompt the user for an int
f(str, n); // Now use both values

Haskell differs from imperative languages by forcing you to explicitly specify the order of all computations with side effects, so that there is never any ambiguity. Haskell forces us to write:

main = do str <- getLine
          n <- getInt
          f str n
In other words, Haskell requires imperative best practices by forcing the programmer to order all side effects and not rely on any implicit ordering by the language. This improves the readability of Haskell code because you can always reason about the order of side effects: a do block guarantees that side effects always order from top to bottom.

Miscellany

Haskell prides itself on being an orthogonal language: almost every concept is syntactic sugar on a few core concepts. This section describes a few last syntactic niceties that Haskell provides that build entirely on top of the previous concepts.

First off, when we sequence two commands:

do command1
   command2

... that is just equivalent to discarding the result of the first action:

do _ <- command1
   command2

Finally, you can define pure computations within a do block using let:

do let x = y
   f x

= do f y

For example:

main = do
    str <- getLine
    let str' = "You entered: " ++ str
    putStrLn str'

$ ./example
Orange<Enter>
You entered: Orange

Conclusions

Actually, all of that is just the tip of the iceberg because do notation is also just syntactic sugar! If you want to learn more about that, then I recommend you read You Could Have Invented Monads, which explains how do notation itself is built on top of an even more primitive core concept. The more you learn Haskell, the more you realize how it is just a bunch of syntactic sugar on a very small core of orthogonal concepts.

Monday, January 14, 2013

I promised in pipes-3.0 that I would release a resource management library for pipes and now I'm delivering on that promise. pipes-safe extends pipes with resource management and exception handling. Here are the big highlights that I want to emphasize:

You can now lazily manage resources and conserve handles

The library also adds exception handling, meaning that you can catch and resume from any exception within a pipeline

pipes-safe interoperates cleanly with existing unmanaged pipes code.

The central code is reasonably simple and many people should be able to read the source and reason about its safety

Lazy Initialization

Now you can package up allocation information with streaming resources, which simplifies their presentation. You don't have to say "run these allocation routines before the session to expose this resource, now stream from that resource, and then run these close routines afterwards".

This means that you can now just concatenate multiple resources and trust that they only open in response to demand and only one resource is open at any given time.

Prompt Finalization

There was one issue with finalization: in order to guarantee safety, I cannot always guarantee prompt finalization when composition terminates. I can only safely run dropped finalizers at the end of the Session. However, the library lets you trade safety for prompt finalization when you can prove that the prompter finalization is safe.

In practice, this will not be an issue for most users since the dominant use case is a session that is just one linear chain like this:

session = p1 >-> p2 >-> ... >-> pn

In that case, the end of composition coincides with the end of the Session, so there is no delay in finalization. You only need concern yourself with it if you try to do fancier things, and the documentation explains how to safely use the prompt finalization primitives in those cases.

In the documentation for the prompt finalization primitives I outline the "pathological" case that foils all attempts to safely finalize things promptly. I'll repeat it here since it is very illuminating:

p1 >-> ((p2 >-> p3) >=> p4)

When p3 terminates, we might naively expect that it finalizes p2 promptly, but not p1. After all, if we finalized p1, we might accidentally access the finalized resource if p4 were to request more input.

However, this intuition leads to a contradiction when we carefully select p2 to be idT and p4 to be return:

p1 >-> ((idT >-> p3) >=> return)
= p1 >-> p3

In this scenario, if we don't finalize p1 when p3 terminates, then we are not being prompt! You don't even necessarily have to use idT. Setting p4 to return suffices to trigger the problem, thanks to associativity:

p1 >-> ((p2 >-> p3) >=> return)
= p1 >-> (p2 >-> p3)
= (p1 >-> p2) >-> p3
= p12 >-> p3
  where p12 = p1 >-> p2

Associativity guarantees that we can combine the two upstream pipes and treat them like a black box. Again, if p3 terminates, we would have to finalize p12 which contains p1. This contradicts our assumption that we could not finalize p1.

The old Frames implementation used indexed monads to avoid this problem because the result of composition had to end in a closed state. Therefore, when (p2 >-> p3) would terminate, it would end in the closed state and would consequently forbid p4 from requesting more input, thus guaranteeing that you could safely finalize p1 promptly.

This example demonstrates something that I had difficulty articulating up until recently: There is no meaningful way to distinguish between pipes that are "directly" composed (like p2 and p3) and "indirectly" composed (like p1 and p3). This foils any attempt to finalize things both promptly and safely.

Note that the second example applies to conduit, too, and I suspect that conduit has the same latent problem and cannot guarantee both prompt finalization and associativity. When I have more time I will dig back in to conduit's source and see if my intuition is correct.

Update and clarification: pipes-safe DOES promptly finalize if any bracketed block terminates normally or receives an exception. The finalizer is only delayed if another pipe composed with it terminates before the bracketed block completes.

Native exception handling

pipes-safe improves on conduit in one important way: You can catch and resume from exceptions in pipes code so that you can continue streaming where you left off. pipes-safe builds on the EitherP proxy transformer to integrate exception handling natively within proxies.

In fact, EitherP gave me the strongest motivation to complete this library. I felt that it would be a really big shame to be the only streaming library with an elegant error-handling framework but then not use it to handle exceptions.

Backwards Compatibility

Another way that pipes-safe improves on conduit is that the resource management system does not require any integration with the core Proxy type or with the standard libraries. It is a well-behaved member of the pipes ecosystem that requires no buy-in from other pipes libraries in order to interoperate with them.

I provide the try function, which upgrades "unmanaged" proxies to "managed" proxies. try is a "proxy morphism", meaning that the corresponding functor preserves all five of these categories:

The Kleisli category

The pull-based proxy composition category

The push-based proxy composition category

The "request" category

The "respond" category

This solution is a perfect example of practical category theory, specifically the functor design pattern. I don't have to require a rewrite of every existing proxy to take into account resource management. I instead just define a functor that automatically promotes unmanaged proxies to managed ones as if they had been written from the ground up with resource management in mind.

Code that doesn't need resource management just proceeds as before, blissfully unaware that there is such a thing as a pipes-safe library or exceptions or resource management. If it ever needs to be used in a safe context, try automatically promotes it to behave correctly, avoiding unnecessary code duplication.

My big objective when designing this library was that pipes-safe would require zero buy-in from the community and from the standard libraries. Fortunately, that's precisely the problem that functors solve by providing well-behaved compatibility layers. In this case, the try function provides that compatibility layer.

Simple Implementation

pipes-safe is very simple and has a clear implementation. In fact, I encourage you to read the source yourself if you want to reason about the safety of the library. The only non-trivial function is the internal registerK function, which serves a similar purpose to the resourcet library.

registerK saves pending finalizers from other proxies so they don't get lost if composition drops them. Unlike resourcet it uses an elegant zipper-like behavior to keep track of finalizers rather than a Map that requires globally unique IDs. This also means that it has perfect time complexity, being just O(1) for all finalization operations. In fact, you could actually implement it using just StateT in the base monad if it were not for exceptions. However, I had to use IORefs in order to ensure that the finalizer state survived exceptions so it is similar to resourcet in that regard.

pipes-safe does not use monad-control and it doesn't use any ad-hoc or unprincipled type classes. Instead it just reuses the Proxy class and the EitherP proxy transformer to do everything so that you don't have to learn any new concepts to understand how it works.

Conclusion

With pipes-safe complete, my next major targets are:

Native parsing for proxies with optional backtracking

Bytestring support

I actually already have the bytestring library up on GitHub, but I haven't released it yet. The reason is that I've been doing a lot of work recently on distinguishing between pipes as a bytestring (or builder) transport layer and as an ordinary session layer. The former is quite challenging to implement correctly, but it will be ultimately rewarding because it will allow people to control the properties of the stream without affecting the payload, and also allow people to stream irregular payloads instead of just list-like things.

I will elaborate more on this in a later post, but the point is that the direction of that work affects what proxies I include in the bytestring library and what proxies will go in a separate transport layer library and that's why I haven't published it yet.