Thursday, February 17, 2011

One old rule is: more types, less code. This is because a less "sophisticated" type provides less "support" to the code, so you need more code to keep things under control. Another way to put this is that the invariants provided by the use of more types allow the code to be simpler. For example, long ago I experienced this kind of code simplification with the record/struct when moving to C and Pascal from early BASIC and FORTRAN (and assembly). And again with classes and objects when moving to C++. And again with variant types and later monads in functional programming.

Another rule is: better architecture, less code. I have repeated this mantra for many years, and yet, thinking about it, I am not even very sure I have a good definition of architecture. I used to say: 'architecture is what does not change in the system'. Now maybe I should say: 'it's what does not change and what is not a type'. This then raises the question: 'If I can capture more of my system in its types, does that make the architecture simpler?'

I am starting to get the feeling that architecture is a little bit like dynamic typing: you use it when you do not know better. I am tempted to introduce a new rule: 'better typing, less architecture'. I will see if I can make that rule stick!

Monday, February 07, 2011

Functional programming is about working with pure "low level computational constructions" in order to sustain "higher level computational abstractions" across an algorithm or program.
You may remember from your computer science education that there are three ways a function (lambda expression) can be rewritten (or reduced):

Alpha reduction: renaming bound variables

Beta reduction: applying functions to their arguments

Eta reduction: dropping a redundant function wrapper, rewriting fun x -> f x as plain f when x does not otherwise appear in f
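These rewrites can be illustrated with ordinary lambdas; here is a minimal sketch in Python (names are illustrative, not from any particular source):

```python
# Beta reduction: applying a function to its argument.
# (lambda x: x + 1)(41) rewrites to 41 + 1.
inc = lambda x: x + 1
beta_result = (lambda x: x + 1)(41)  # same value as inc(41)

# Eta reduction: a wrapper that only forwards its argument
# can be dropped. `lambda x: inc(x)` behaves exactly like `inc`.
eta_expanded = lambda x: inc(x)
assert eta_expanded(41) == inc(41)
```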

Functional languages typically have more features than simple lambda expressions. For example, functions can have multiple arguments, types and type inference are available, and possibly also classes and objects. Yet functional programming is really about "pulling" the low-level concepts of the function, especially beta and eta rewrites, upwards into higher and higher abstractions.

For example, an immediate application of beta reduction is laziness: a "smart" control of function application that implements a "compute on demand" logic, making it possible to deal with non-terminating computational structures. Other flavors of beta rewriting serve similar purposes.
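The compute-on-demand idea can be sketched with thunks; this is an illustrative Python example (not the post's code), where the infinite stream of naturals is a non-terminating structure that laziness makes workable:

```python
def cons(head, tail_thunk):
    """A lazy stream cell: the tail is a zero-argument function (a thunk)
    that is only applied, i.e. beta-reduced, when it is demanded."""
    return (head, tail_thunk)

def naturals(n=0):
    # A non-terminating structure: the infinite stream n, n+1, n+2, ...
    return cons(n, lambda: naturals(n + 1))

def take(stream, k):
    out = []
    while k > 0:
        head, tail_thunk = stream
        out.append(head)
        stream = tail_thunk()  # demand the next cell only now
        k -= 1
    return out
```

Calling `take(naturals(), 5)` forces only five cells of the stream.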

Eta reduction is possibly even more important than beta reduction. It is the rule that allows your code to "change its focus" by swiveling the order of the arguments of functions and bringing in new arguments. Working with functions can at first seem horribly constrictive because they have only one input (their arguments) and one output (their result). Worse, the order of the arguments matters, because only the last argument(s) can be moved from a left-side formulation to a right-side formulation: from f x y = ... to f x = fun y -> ... Nevertheless, eta reduction says that f x y can be rewritten as f' y x, or even as f'' y x z with an extra argument z, as long as x and y are still used in the same fashion. One of the first things to learn in functional programming is how to order arguments so that functions fit together. My basic rule is that there are three kinds of arguments:

Parametric: they are generally constant over the lifetime of the function

Stateful: 'environmental' parameters that represent some form of state being passed around

Primary or type centric: the main arguments of the function, the ones the function has been written for
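Swiveling the order of curried arguments can be sketched in Python; `flip` and `divide` below are illustrative names of mine:

```python
def flip(f):
    """Rewrite a curried f taking x then y into f' taking y then x."""
    return lambda y: lambda x: f(x)(y)

# A curried function whose argument order is inconvenient:
# we want to fix y, but only x can be supplied first.
divide = lambda x: lambda y: x / y

# After flipping, the argument we care about fixing comes first.
halve = flip(divide)(2)
```

Now `halve(10)` computes `divide(10)(2)`, i.e. `5.0`.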

The magic of the eta rewrite property is that parameters can be reordered so that the parametric parameters come first, then the stateful ones, and finally the primary ones. Doing so allows the following:

Functions can directly be "specialized" by instantiating their parametric arguments. For example, a function f n x can be instantiated as f100 = f 100. (This example is meant to show the concept; in reality what is usually done is that f n x is defined, and another function calculates n and then creates fn as f n for that fixed value of n. And typically n is not numeric but often a function!)

Stateful parameters can all be regrouped into a single argument and wrapped up in a standardized calling pattern. We have just reinvented the state monad!

The primary parameters, being last, can then often be omitted in a combinatorial programming style. The simplest of these is the use of function composition: if we have f x and g y, we can compose f and g without "mentioning" the arguments. (In F#, f >> g)
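The first and third points above can be sketched together in Python (the function names are illustrative):

```python
def f(n):                      # the parametric argument n comes first...
    return lambda x: x * n     # ...so it can be fixed early

f100 = f(100)                  # specialization: instantiate n once

def compose(f_, g_):
    """Point-free composition: because the primary argument comes
    last, it never has to be mentioned (cf. F#'s f >> g)."""
    return lambda x: g_(f_(x))

# Build a pipeline without naming the primary argument.
double_then_inc = compose(f(2), lambda x: x + 1)
```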

Having now defined these three "characters" of arguments, we can go a little deeper:

Parametric arguments can be normalized so as to support modularity. The concept is a little bit like virtual method tables in OO: introducing an indirection provides flexibility. An additional pattern I favor is to bootstrap the parametric arguments with fix-point combinators.

As I mentioned: the stateful argument is very much what the state monad is all about. Therefore I favor always using monads to carry around the stateful parameter(s).

The primary parameter can also be normalized. For example, it may be supporting an iteration, a mapping or a join-like operation. Another example is that it may have sub-types, and combinatorial operations may be defined to work only on specific sub-types.
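The fix-point bootstrap of a parametric argument can be sketched with open recursion; this Python example is illustrative (the `fix`/`open_fact` names are mine):

```python
def fix(open_f):
    """Tie the knot: turn an 'open' function, which receives itself
    as a parametric argument, into an ordinary recursive function."""
    def tied(x):
        return open_f(tied)(x)
    return tied

# The recursive call is just a parametric argument, so before tying
# the knot it could be intercepted or replaced (logging, memoization, ...).
open_fact = lambda self: lambda n: 1 if n == 0 else n * self(n - 1)
fact = fix(open_fact)
```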

I'll stop here for today, but there are definitely a few other key constructions of functional programming worth mentioning.

Sunday, February 06, 2011

Crap! I spent this evening rewriting some code from a monad "accessor" concept to a monad "adaptor" concept. In my terminology:

The accessor is a payload of functions that is given to a monadic function to allow it to be compatible with a globally common monadic state type

The adaptor is a monad to monad wrapper that allows a monad written for one state type to work for another.

Now I find that as the adaptor works top down, it will see different type signatures, so it needs to be polymorphic. As it is passed around as an argument, that polymorphism would make it a functor. But F# does not allow functors! Very crappy.

I have two choices: revert back to the accessor or... or pass a tuple or record of adaptors and use "one per payload type". Not good, because it defeats the purpose, which is to use the adaptor concept in layers. I am tempted to ditch this adaptor concept in F# for the moment.

Here, runstate gets the function out of the monad m and applies it to the state s. This normally results in a success "pair", with the second term being the new state and the first term being the optional return value of the monad (the unit type when there is no return).
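The original snippet is not shown here; modeling a state monad as a function from state to a (result, state) pair, a minimal Python sketch of my reading of runstate (not the post's F#) would be:

```python
# A state monad value is modeled as a function: state -> (result, state).
# `result` plays the role of the optional return (None standing in for unit).
def unit(value):
    """The trivial monad: return the value, leave the state unchanged."""
    return lambda s: (value, s)

def runstate(m, s):
    """Apply the function carried by the monad m to the state s,
    yielding the success pair (result, new_state)."""
    return m(s)
```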

What adapt does is to adapt a monad to work with a different state, normally a larger state that includes the state of the monad. Now I can rewrite the following functions:
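A sketch of what such an adapt could look like, again with the state monad modeled in Python as state -> (result, state), and get/set as the projections into and out of the larger state (all names here are assumptions of mine, not the post's code):

```python
def adapt(get, set_, m):
    """Lift a monad m written for a small state into a larger one:
    `get` extracts the small state from the big one, `set_` writes
    the updated small state back into the big one."""
    def adapted(big):
        result, small = m(get(big))       # run m on the embedded state
        return result, set_(big, small)   # re-embed the updated state
    return adapted

# Example: the small state is the second slot of a pair.
inc_small = lambda s: (s, s + 1)          # a monad over an int state
lifted = adapt(lambda big: big[1],
               lambda big, small: (big[0], small),
               inc_small)
```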

These functions are used to locally grow the monadic state into a pair, where the second argument is the original state and the first is provided as an argument (pairM and pairM'), and then to run "local" monads that are defined either for the first or second state. The ' annotation indicates that we are working with functions that return monads, not "raw" state monads.
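The functions themselves are not preserved in this post; a Python sketch of the pairM idea (my reconstruction under the state -> (result, state) model, not the original F#) might look like:

```python
def pairM(extra, m):
    """Run m over a locally grown state: the pair (extra, original).
    The first component is supplied as an argument, the second is the
    original state; the extra component is dropped afterwards."""
    def grown(s):
        result, (_, s2) = m((extra, s))
        return result, s2
    return grown

# A local monad defined for the paired state: return the first slot
# and increment it in the new state.
bump_first = lambda p: (p[0], (p[0] + 1, p[1]))
```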

This brings me back to my post on local versus global monadic states in large applications: a third solution to integrate a monadic module is to write the module purely with a local state and then to "adapt" it into a larger state with the functions above. The limitation of doing things this way is that these modules cannot call other modules, as they have no other state to make available than their own.

Finally, you may note those "get and set" arguments of the adapt functions. They are not ideal, but I think I need to rewrite my error management to remove the explicit state from the failure type. My gut feeling tells me that would allow a "two-way" interaction where the parameterizing argument to adapt could itself be a monad.

As you see, it is not only monadic but also knows nothing about the recursion of the expression type. That is on purpose, so that this map function can be combined with other map functions to transform any use of the parameterized expression type. Anyway, this function is meant to take in one version of the expression type and produce another one. What does not work is the ability to impose that. Instead, when my code does not compile "somewhere else", I need to figure out that I have made a mistake. For example, yesterday evening I had added the code:

| ExprDef e ->
let! e' = mapER e
return ExprDef e

That last line is missing a single quote and should be ExprDef e'. The compiler will not tell me; it silently unifies the returned expression type with the input expression type. To find the bug I can make the polymorphism explicit, with the function defined as:

Now the compiler will report that 'e1 does not match 'e2. Making the polymorphism explicit would be the correct way to impose this "not equal type" constraint. The problem is that F# has a fragile type inference algorithm, and if polymorphic functions are always declared explicitly, the inter-module type inference simply fails with impossible errors. Therefore I do not explicitly declare the polymorphism, and this hurts.