On a sunny day in September 2015 a group of functional programming enthusiasts assembled in Basel, Switzerland, for the first Frege Day.

Frege is a Haskell for the JVM and brings the purely functional programming paradigm to the Java platform.

It was so nice to finally meet in person all those whom we otherwise know only from mail, Twitter, and other internet channels. There is nothing like the energy that comes from dedicated people coming together.

But the gathering was not limited to the Canoo office in Basel. It also virtually included Frege friends in New Zealand, Australia, and the US, who extended their usual working hours to very late in the evening or to the very early hours of the morning. This made it a truly international conference!

This became the major topic of the day: how to evolve the language, the libraries, the tools, and the surrounding material to make pure functional programming on the JVM a fruitful and enjoyable experience. You can read more about the details here: https://github.com/Frege/frege/wiki/Protocol-for-Frege-Day-2015

The day was a major boost for the emerging community and there is lots of potential for even higher popularity.

Dierk Koenig (JavaOne Rock Star) works as a fellow for Canoo Engineering AG. He is a committer to many open-source projects including OpenDolphin, Frege, Grails, and Groovy. In an interview with JAXenter he talks about building user interfaces with projector patterns in HTML5 and JavaFX, Groovy, and the W-JAX 2015.

After having written episodes 1, 2, and 3, I planned to close the series with some parting thoughts about what we have achieved and where I struggled along the way. But in the process of writing down my findings, it occurred to me that we have achieved an unusual state that we can leverage to our advantage:

All our code is purely functional

1) There is not a single assignment in the code, no state that needs to be updated, and therefore also no shared mutable state that could impose any issues when going parallel.

When seeing an = sign in the code, the casual reader may misinterpret it as an assignment, which it is not. It is a one-time definition that never changes. All is safe.

2) There are no side effects in the code.

3) You cannot imagine my surprise when I realized that we don’t even have to check 1) and 2) over the entirety of our code, because the Frege type system validates our assumptions about purity automatically!

One often advertised advantage of functional programming is its improved support for parallel execution when compared to imperative solutions. And that claim is true – to some degree. The point is that “a little bit of functional style” helps very little when going parallel. As long as you cannot guarantee that all the code you are going to execute in parallel is MT-safe, the ability to call parallel versions of higher-order functions like map, filter, reduce, etc. is only of superficial syntactical help. With that approach, it remains the obligation of the programmer to inspect the whole call chain, from all methods that are called from inside a concurrent operation (like map) down to their very last primitive statement, and validate them for MT safety. And even if all the code is safe at the time the concurrent use is introduced, what about later changes, where the maintainer is not aware that this characteristic has to be preserved?

Tic Tac Toe in parallel

This is where the big difference between
“functional style” and
“pure functional” programming
becomes not only undeniably apparent but also of utmost practical relevance.

A new requirement comes up: go parallel

Let’s try to evaluate our Tic Tac Toe moves in parallel. There is one line where we map all boards that can arise from making a valid move to their evaluation, where probe is a function that serves both purposes: finding the best board via the minimax algorithm and returning the forecast:

Haskell

evaluatedMoves = map (probe lookahead) possibleMoves

In order to make the mapping parallel, we can use the mapP function from the frege.lib.ForkJoin module:

Haskell

evaluatedMoves = mapP (probe lookahead) possibleMoves

The superficial syntactical advantage is that we can make the code parallel with just one (!) additional character.

While this is kind of cool in itself, the really impressive effect is that this change is guaranteed to be safe! This is fundamental and it works because of the way that purity is handled through the type system.

It works like this: the type signature of map is

map :: ( α -> β ) -> [α] -> [β]

Let’s start with the first argument ( α -> β ).

It means that map‘s first argument is a function from any unconstrained type α to any unconstrained type β. Let’s call that function f. The map function applies its argument function f to all elements of a list of αs to produce a list of βs.
In our case f is (probe lookahead).
If (probe lookahead) used any kind of state or any kind of side effect, then type α or type β (or both types) would need a type constraint like State α or IO β.
In such a case, map (probe lookahead) would simply fail to compile.
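To illustrate the point, here is a sketch in plain Haskell, which Frege closely mirrors (double and noisy are made-up names for this example):

```haskell
-- A pure function fits map's (α -> β) argument directly.
double :: Int -> Int
double x = 2 * x

doubled :: [Int]
doubled = map double [1, 2, 3]   -- [2, 4, 6]

-- An effectful variant has a different type: its result is wrapped
-- in IO, so `map noisy [1, 2, 3]` yields [IO Int], not [Int] --
-- the effect shows up in the types and cannot sneak past them.
noisy :: Int -> IO Int
noisy x = do
    print x
    return (2 * x)
```

Wherever a plain [Int] is expected, handing over the effectful version is a compile-time error, not a runtime surprise.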

In other words: the signature of map requires purity of f, and the Frege type system enforces this down the whole call chain! We do not have to go back and check ourselves, nor can any later intrusive change by a maintainer undermine our foundation.

Now, what happens when we go from map to mapP? Here is the type signature of mapP:

mapP :: ( α -> β ) -> [α] -> [β]

That is exactly the same! The whole line of argumentation from above persists!

Let that sink in for a minute.

With Frege, we can again apply a non-intrusive increment: from sequential to parallel execution in an absolutely safe manner. Purity and its handling in the type system make that possible.
In all other popular JVM languages this would have been an immensely intrusive change (even if old code was not edited at all – changing its mode of operation is intrusive) with a huge potential to introduce hard-to-detect errors.
And again, we were able to apply this non-intrusive increment without having planned for it. We just did what is usual in Frege and it turned out to just fit.

We can almost always safely go from map to mapP.

There are two considerations that one should weigh before blindly making all calls to map parallel:

1) Granularity
We applied parallel execution on the top-most layer of the evaluation only, not down to all splits of possible boards. That is a usual performance concern for data-parallel concurrency, where we weigh the relative overhead of thread scheduling against the size of the unit of work.

2) Infinite data structures
Parallel execution usually makes no sense to be done lazily, since new threads should start working early enough to have the result ready when requested. But that approach would never finish when applied to infinite data structures at the top-most level of expressions. Therefore, mapP enforces the weak head normal form (WHNF) of the given list. That still allows laziness at lower levels.
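How shallow WHNF is can be shown with a one-line GHC sketch: forcing a value to WHNF evaluates only its outermost constructor and leaves everything beneath it untouched (whnfIsShallow is a made-up name):

```haskell
-- `seq` forces its first argument to WHNF only. The Just constructor
-- is evaluated, but its payload never is, so the embedded
-- `undefined` does no harm.
whnfIsShallow :: Bool
whnfIsShallow = Just (undefined :: Int) `seq` True
```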

But those considerations are fully inside our mental context when applying our increment. We are in the position to decide about granularity and we know that possibleMoves is a finite list. It is all on one line and we do not have to go further.

My personal experience was very strange when I applied concurrency to the Tic Tac Toe game. I went from map to mapP, let the code run, and enjoyed the speedup. Then I fell back into my old habit of paranoid concurrency thinking: what could possibly go wrong?

And for the first time ever in such a situation, I couldn’t think of any possible issue.

In episode two we have seen how designing with functions naturally leads to incremental development. We will now extend our running example with higher-order functions and data type evolution.

Episode Three: higher-order functions and data type evolution

In episode one we made the case for incremental development and the importance of staying non-intrusive. What we will do now is change code from the last increment.

a stack of increments

Changing code is what we declared as being intrusive but changing the last increment is allowed.

The problem with changing code is that any code that depends on it may be compromised. That is the intrusion. Since no code can possibly depend on the last increment, changing it is safe.

With that consideration out of the way, let’s look at our last increment that finds the minimax value for a given board:

Haskell

maxValue = maximize . static . prune 5 . gameTree

It works fine but we would like to make it even more idiomatic in the functional sense.

We have already seen that functional programmers like to introduce abstractions very early and there is a hidden abstraction in the code above. We map a tree of boards to a tree of doubles. The structure of the tree remains untouched. This is the nature of an abstraction called Functor. Functors support a general function fmap that keeps the current structure but applies a function f to all elements. It turns out that our Tree type is such a Functor.
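A minimal sketch of what such a Functor instance can look like (the real Tree type from the series may differ; the Node constructor and the list of children are assumptions here):

```haskell
-- A rose tree: a payload plus any number of subtrees.
data Tree a = Node a [Tree a] deriving (Eq, Show)

-- fmap keeps the tree's shape and applies f to every payload.
instance Functor Tree where
    fmap f (Node a children) = Node (f a) (map (fmap f) children)
```

With such an instance, mapping a board-evaluation function over a tree of boards yields a tree of values of identical shape.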

To use the Functor nature of trees, we use fmap f instead of static.

Haskell

evaluateBy f = maximize . fmap f . prune 5 . gameTree

maxValue = evaluateBy static

maxValue=evaluateBystatic

Note that evaluateBy now takes a parameter f, which can be any function that takes a Board and returns an Ord as required by maximize.

A function like evaluateBy that takes another function as a parameter is called a higher-order function. They are heavily used in functional programming and in fact we have used them before without giving it any attention. The current case, though, is different. We use this very common functional feature to set the stage for non-intrusive increments.
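As a toy model of this pattern (all names here are illustrative, not the game's real code), one higher-order evaluator can serve several purposes simply by swapping the function it is given:

```haskell
-- A hypothetical evaluator parameterized by its scoring function,
-- in the same spirit as evaluateBy: score everything, keep the best.
evaluateListBy :: Ord b => (Int -> b) -> [Int] -> b
evaluateListBy f = maximum . map f

-- Two different uses, with no change to evaluateListBy itself:
highest :: [Int] -> Int
highest = evaluateListBy id          -- the largest element

negatedLowest :: [Int] -> Int
negatedLowest = evaluateListBy negate  -- the smallest element, negated
```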

Computer knows he will win

A new requirement comes up: predict the future

It appears that playing the game is fun but sometimes it is difficult to understand why the computer does a certain move. It would be interesting if we had a bit of insight and could show what the computer is considering. The idea is that we display a forecast: the final board that the computer thinks the game will develop into.

To this end, we need to capture the board that led to the minimax value. It turns out that this is easier than expected. We can use evaluateBy but we do not pass the static function any more. Instead, we pass a function that maps a board not only to its static value but to a pair of the static value and the board itself.

Haskell

endValue = evaluateBy capture where
    capture board = (static board, board)

This is nice. First and foremost it is a fully non-intrusive increment! We did not change any existing code at all.

Second, it falls in place very naturally when following the functional style with higher-order functions – without any anticipation of new requirements.

Think what would have happened in a typical object-oriented style. We can of course achieve similar effects in OO but this requires some forethought to prepare for new use cases: template methods, strategy pattern, and the likes. Without such anticipation, OO design would lead to intrusive changes. One way or the other: incremental development is harder.

The downside is: it doesn’t compile.

Our mapping function must return an Ord, but a pair (any tuple, actually) is only of type Ord if all its elements are of type Ord as well. The Double value is, but the Board type is not.

Hm, what next? Going back to the definition of Board and changing its definition? That would be an intrusive change.

Here we see a big benefit of Frege typeclasses (as opposed to, let’s say, Java interfaces): we can evolve the data type! Board becomes an instance of the Ord typeclass non-intrusively, in a new increment, with full static type safety! There is not much logic to add since we make all boards equal. For the purpose of ordering our pairs, we only need the static Double values of the pair.

Haskell

instance Ord Board where (<=>) a b = EQ

This code looks so simple but the implications are huge. By making Board an instance of Ord, we have given the Board type a new capability!
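The effect can be sketched in plain Haskell with a stand-in Board type (Frege's (<=>) plays the role of compare here, and the Board definition is an assumption for the example):

```haskell
newtype Board = Board [Int] deriving Show   -- stand-in for the real type

-- All boards compare equal, so a (Double, Board) pair is ordered
-- by its Double score alone -- exactly what the forecast needs.
instance Eq Board where
    _ == _ = True

instance Ord Board where
    compare _ _ = EQ
```

Now maximum over a list of (score, board) pairs picks the pair with the highest score and carries its board along for free.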

Note that this even works when the source code of Board is not available. It can come from a third-party library that we only have in binary form. We can even evolve foreign data types.

Now we have everything compiling and working. You can play the game and see the forecast working.

In the next episode, there is a big surprise waiting with regard to parallel execution.