This is the second part of my article about the Elm programming language. In the first article I focused on the main features that make Elm an interesting alternative to JavaScript.

Here, I’ll focus on the historical perspective that makes Elm worth a look, and go deeper into the language’s two main features: functional purity and static typing. After that, I will point out some good examples of who is using Elm in production.

The low-profile success of Elm’s architecture

The best way to predict the future is to invent it.

Alan Kay, designer of the Smalltalk programming language

Before 2013, most front-end programming followed very imperative approaches. Frameworks like Backbone or Angular v1 were inspired by the model-view-controller pattern (MVC) and made the mutation and observation of view models the core of their display logic. Handling so much state and so many events was hard and uncomfortable for many web developers, who missed the simple data flow of their share-nothing backend web frameworks and sighed for the days before JavaScript and AJAX took over their world.

The one-way data flow

React was the first project to shake this imperative house of cards. Its most distinctive feature is the virtual DOM, which enables programmers to think of their web page as something that can be efficiently re-calculated and re-rendered at low cost.

Such efficiency is what enables React’s way of modeling the front-end: a webpage consists of a tree of components, and each time the user interacts with them, the whole page’s appearance is re-calculated based on the new state stored within those components. You do not have to actively change the DOM anymore; React takes care of that for you.

Although React made it easier to model web applications, it did not prescribe a way to separate business logic from rendering logic. Facebook did propose an architecture called Flux to solve this problem, though many of the early implementations were more complex than needed and took time to gain traction.

A functional architecture for React applications

In 2015, Redux appeared and emerged as a simpler implementation of the Flux architecture. The main idea is that the application’s state should be stored in a single state tree and that changes to this tree should be modeled by a reducer function. This reducer is itself the composition of several smaller reducer functions, each responsible for one part of the application’s state, modeling the logic to process each action and produce a new version of the state.

Redux is a simple library with a very small core (around a dozen small functions). It is easy to grasp, pleasant to use, and enables cool development features like time-traveling debuggers and hot module reloading.

Despite this simplicity, the approach is not intuitive and feels alien to JavaScript. Writing reducer functions that do not change the previous version of the state is not trivial, and it is even recommended to use a library to handle the state in an immutable way. It is thus no big surprise that Redux’s author made it very clear that this architecture is heavily inspired by Elm.
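Indeed, Redux’s reducer corresponds almost exactly to Elm’s update function: a pure function that takes an action (a message, in Elm terms) and the current state and returns a new state. Here is a minimal sketch in Elm, with hypothetical Model and Msg types of my own invention:

```elm
-- A hypothetical state tree for a small application.
type alias Model =
    { count : Int
    , name : String
    }

-- The actions ("messages") that can change the state.
type Msg
    = Increment
    | SetName String

-- A Redux reducer corresponds to this pure update function:
-- given a message and the old state, it returns a new state
-- without ever mutating the old one.
update : Msg -> Model -> Model
update msg model =
    case msg of
        Increment ->
            { model | count = model.count + 1 }

        SetName newName ->
            { model | name = newName }
```

Note that `{ model | count = ... }` builds a new record instead of mutating the old one, so immutability comes for free – the very thing Redux needs a helper library for in JavaScript.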

Static typing

Courage is knowing what not to fear.

Plato

If Elm’s architecture succeeded after being blended with an alien imperative technology like JavaScript, even at the cost of taking developers out of their comfort zone, how good can it be when kept away from the “hermeneutics” of dynamic typing and the unsafety of arbitrary state mutation? Is there more to take from this lode? Are there other secrets under Elm?

The JavaScript ecosystem seems to need static typing

When a software project starts, all the code is beautiful, pink and shiny. Then you start to face deadlines, bugs, hotfixes, scope changes, company mergers, acquisitions and a horde of over-allocated developers changing the code while the original author is on vacation, sick or dead. As it ages, every line of code carries the weight of its history, and often nobody is left to tell it.

On badly aged projects, variable names lie, function names lie, class names lie and type names lie. Yet when using dynamically typed languages – like JavaScript – only names and tests provide us with information about the semantics of a program and about how it can be changed and improved.

In praise of static typing

Statically typed languages – on the other hand – check the semantics of the program and help assure its correctness. The names might be wrong, but we know which operations match which types, and the compiler automatically checks whether our changes make sense.

The information such systems provide to developers is worth a thousand names. Statically typed programs are easier to change and require much less testing, because the type system prevents several classes of errors that would otherwise have to be caught by automated tests written by a human.
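As a tiny illustration (in Elm, the language this article builds toward), the compiler rejects a call whose argument does not match the declared type – no test needed. The function and its name are hypothetical:

```elm
-- The name could lie ("describeAge" might one day describe
-- something else), but the type cannot.
describeAge : Int -> String
describeAge age =
    "You are " ++ toString age ++ " years old"

-- describeAge "thirty" would be rejected at compile time:
-- the compiler knows the argument must be an Int, regardless
-- of how well (or badly) the function is named.
ok : String
ok =
    describeAge 30
```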

The downside of static-typing

The problem with types is that – in most languages – they get in the way when writing new code, debugging, or experimenting with how the code behaves. Usually, types add a lot of unnecessary noise to the conversation that developers have with the machine and with other developers while coding.

With types, developers are also forced to write more – and while that might not be very bad when using the editor’s auto-complete – it is a catastrophe when experimenting in interactive environments like the command line or the debugger. The bulkiness of code full of type information creates plenty of inertia and implies a huge loss of agility. Choosing between static and dynamic typing thus ends up as a tradeoff between usability and maintainability.

The JavaScript ecosystem is moving towards gradual typing

It is thus no surprise that we find several initiatives bringing static typing into the JavaScript world. The most notable ones are Flow (Facebook), Typescript (Microsoft), and Dart (Google). While these are not changes to the JavaScript language itself, they are either supersets of the language that add typing or similar languages that directly compete with it. This is building plenty of pressure on developers to move to a statically typed approach, because the tools and libraries are also moving that way.

However, there is a common trend among all these approaches: in order to avoid the bulkiness of static typing, they have chosen to make types optional. Thus – depending on the context – developers are free either to add type information to their code or to work without type checking – and without the type safety that comes with it.

The results are better than without types, but still relatively limited. As typing is gradual, it is often impossible to check whether the type signatures make sense, and the amount of verification depends heavily on the developers’ discipline and on whether the libraries used provide type information for the same type system.

Elm has a different approach

Elm’s ideas come from the typed functional world, in which decades of development resulted in the use of type inference. Unlike with dynamic or gradual typing, the compiler checks all the types in the code. Yet, unlike in most static type systems, the type information does not have to be entered explicitly by the programmer – it is inferred by the compiler.

The programmer may choose to add type signatures to their functions later, and these signatures sit on separate lines from the function definitions. This way, they do not clutter the code for whoever later reads or edits it.
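For example, Elm infers a function’s type on its own, and an annotation, when we choose to add one, goes on its own line above the definition rather than being interleaved with the parameters. A small sketch with hypothetical functions:

```elm
-- Without an annotation, the compiler infers
-- fullName : String -> String -> String
-- on its own, from the use of the (++) operator.
fullName first last =
    first ++ " " ++ last

-- An optional signature is a separate line above the
-- definition, never mixed into the parameter list.
greet : String -> String
greet name =
    "Hello, " ++ name ++ "!"
```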

However, it should be noted that, while inspired by the type systems of functional languages, Elm does not adhere to their most complex features. It adopts what seems useful to front-end programmers and keeps a pragmatic approach. It is as if JavaScript had been redesigned for its current usage.

Pure functional programming

True wisdom comes to each of us when we realize how little we understand about […] the world around us.

Socrates

Elm is a pure functional programming language. This means that all its expressions enjoy a simple property called Referential Transparency, from which stem several interesting consequences. We proceed by describing what Referential Transparency is and then exploring its consequences.

Referential transparency

An expression or function is said to be referentially transparent when its results are fully determined by its inputs. In other words, when provided with a certain input, a referentially transparent expression will always evaluate to the same result.

Pure functions are easy to reason about and to test

If we think of the mathematical concept of a function, we may notice that it indeed enjoys this property. It is actually quite hard to imagine how it could be otherwise. The function that calculates the area of a square always evaluates to 4 square meters when provided with a 2-meter side, and will always behave as such. There is no way to write a mathematical function that takes the side of the square as its single input and returns different values at different moments in time.
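In Elm, this property holds for every function. A sketch of the square-area example (units left implicit):

```elm
-- Referentially transparent: the result depends only on the
-- argument, so squareArea 2 evaluates to 4 – today and forever.
squareArea : Float -> Float
squareArea side =
    side * side
```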

At least in the mathematics that most of us have been taught, mathematicians seem to have chosen this abstraction as a very important – if not the main – tool to model and reason about the world. I would say this was no accident.

If a function always returns the same results, we can evaluate and test it without taking anything into account except its inputs and outputs. We know exactly what to provide and what to expect when modeling and testing it.

No mutation

But referential transparency is not a reality in most programming languages. With the development of digital computers, most programming languages chose to model the world in a way akin to how the machines internally work, rather than in the way mathematicians used to model it.

There have been attempts to follow the mathematical path since the development of the first compilers. However, the computers of that time were not powerful enough to handle such a computational model, and the machine-like way of modeling the world became an ingrained part of our computer-science culture.

That machine-like way of modeling the world (imperative programming) is mostly based on mutable state. In it, computations are modeled by reading and changing the state of a memory unit until a final state is computed. When adding functions to this model, most languages chose to share this state among functions, thus breaking referential transparency.

Such functions no longer depend solely on their inputs; their outcomes depend on the memory that they read and write, and they often do not return any value at all, serving solely to change the values in the shared state.

In contrast, in purely functional programming, no state is shared among our functions. And if no state is shared, there ceases to be a reason to change the state of the values being used. The aim of our functions stops being to change values and becomes to use them as inputs to calculate other values. In purely functional programming, every definition is immutable.

Elm fits in this language family, and thus all its definitions are immutable and all its functions are referentially transparent.
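In practice, “changing” a value in Elm really means building a new one; the original is untouched. A minimal sketch with a hypothetical record type:

```elm
type alias Point =
    { x : Int, y : Int }

origin : Point
origin =
    { x = 0, y = 0 }

-- The record-update syntax returns a *new* record;
-- origin itself can never be modified.
moveRight : Point -> Point
moveRight point =
    { point | x = point.x + 1 }

-- moveRight origin is { x = 1, y = 0 },
-- while origin remains { x = 0, y = 0 }.
```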

The runtime is in charge of side-effects

Pure functions alone are useless. We perform computations because we want to read input values from the outer world and because we want to change that world according to the results of our computations. The outer world is thus like a state altered by our program, much as the memory is altered in our imperative programs.

Some functional programming languages have found elegant mathematical ways to deal with this issue in a pure-functional and general-purpose way, but they rely on concepts that have proven quite hard to grasp.

Elm has a different approach: its runtime and architecture hide this problem from us. Our program ends up being a set of functions used to calculate what to display to the user, given a certain sequence of user actions or other environmental inputs.

Basically, the Elm runtime and architecture take care of interacting with the world in this particular domain, and they do it in a pleasant and simple way.
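The shape of such a program (here in Elm 0.18 syntax, the current version as of this writing) is just three pure pieces – model, update and view – wired together; the runtime performs all the actual DOM mutation and event handling. This is the classic counter example:

```elm
module Main exposing (main)

import Html exposing (Html, button, div, text)
import Html.Events exposing (onClick)

-- The entire application state.
type alias Model =
    Int

-- Everything that can happen.
type Msg
    = Increment
    | Decrement

-- Pure: old state in, new state out.
update : Msg -> Model -> Model
update msg model =
    case msg of
        Increment ->
            model + 1

        Decrement ->
            model - 1

-- Pure: state in, description of the page out.
view : Model -> Html Msg
view model =
    div []
        [ button [ onClick Decrement ] [ text "-" ]
        , div [] [ text (toString model) ]
        , button [ onClick Increment ] [ text "+" ]
        ]

-- The runtime wires the pieces together and owns all side-effects.
main : Program Never Model Msg
main =
    Html.beginnerProgram
        { model = 0
        , update = update
        , view = view
        }
```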

Real-world Elm?

The future is not laid out on a track. It is something that we can decide, and to the extent that we do not violate any known laws of the universe, we can probably make it work the way that we want to.

Alan Kay

Elm’s ecosystem is quite immature and is evolving slowly. However, the quality of its libraries is often very good, and the compiler gives us plenty of guarantees about their stability. For now, Elm is appropriate for small dynamic webpage components or for simple webpages that do not require server-side rendering.

Some important features – such as server-side rendering and further optimizations – are only expected to arrive with the following version (0.19).

Related Languages

Context is worth 80 IQ Points.

Alan Kay

Elm is heavily inspired by Haskell and has, in turn, heavily inspired Purescript. Here is an outline of what these languages are about:

Haskell

Haskell is by far the language that most influenced Elm. It is the current standard for lazy typed programming languages and is developed by a committee of academics from the field. The main differences from Elm are:

It is a general purpose programming language;

It provides a very polymorphic and complex type system;

The evaluation is lazy (expressions are only evaluated when their values are needed);

It is usually compiled to native code;

It has runtime errors.

Purescript

Purescript falls somewhere in between Haskell and Elm. Both are evolving together and influencing each other as they grow. Its main characteristics are:

General purpose programming language;

Compiles to JavaScript;

Like Haskell, supports a very polymorphic and complex type system;

Its error messages are worse than Elm’s;

Uses JavaScript’s runtime rather than its own runtime;

Easy (and unsafe) interactions with JavaScript code;

Like Elm (and unlike Haskell), it uses strict evaluation;

It has runtime errors.

Due to its complex type system, I do not think it is feasible for most JavaScript programmers to jump straight into Purescript. However, its easy (and unsafe) interaction with JavaScript makes it an interesting option for people who have already gone through Elm or Haskell as stepping stones.

There are three projects in this language’s ecosystem that try to address the same problem as the Elm architecture:

Pux – When using Halogen, its authors realized that the elaborate typing made it very hard for beginners, so they implemented Elm’s architecture in Purescript without deviating much from the simplicity of Elm’s approach. The outcome was a framework called Pux.

Conclusion

Elm is an interesting front-end-specific programming environment that supports a still immature software ecosystem. Due to the language’s design, this young ecosystem ends up providing atypically strong safety guarantees. Many of these stem from the friendly compiler, which guarantees that there are no runtime errors.

It’s very pleasant to work with and allows easy development and refactoring with unprecedented safety and joy. It was designed to fulfill the needs of a modern JavaScript programmer and to be easy to get started with.

For now it’s not ready for developing complex single-page web applications, as it lacks server-side rendering and some important optimizations. The next release (0.19) is expected to address these issues; until it arrives, it might not be a good idea to use Elm for more than small applications or components.

It is worth keeping an eye on this project, which seems to have the potential to become a very competitive tool.