Erlang/OTP vs JVM - a quick comparison


Tomasz Czermiński | 19 Jan 2018

Introduction

There is a high need for concurrent processing, and it will most probably only grow. That is why it is important to know the tools we have at hand, i.e. programming languages and everything around them, which is not limited to merely semantics, syntax, and idioms (though we should still have a solid knowledge of those). We need to go much deeper, to the platform on which our code is executed and, to some extent, even to the hardware level, in order to examine memory, the processor, and how they interact with each other. In this post I would like to focus on the memory architecture (the heap in particular) of two platforms – the JVM and Erlang/OTP. Both of them are mature, battle-tested environments, and each one has a different memory model[i].

Garbage collection

Both the JVM and Erlang have their heaps split into generations. The underlying assumption (known as the weak generational hypothesis) is that most newly created objects die soon after their creation. As a result, garbage collection (GC) can be performed less often for objects that have already survived a few GC cycles. The tracing technique commonly used here is called mark-sweep. Every application has a finite set of roots (e.g. static variables and thread stacks in Java). If you follow the references from each root in the set, you will eventually find all live objects in the program. As soon as all of these objects are marked, the rest can be 'swept'. The problem is that as the heap grows larger, a GC pass takes more time to complete.
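A minimal sketch of mark-sweep over a toy object graph may make the two phases concrete. All class and method names below are invented for illustration; a real collector works on raw heap memory, not on Java objects:

```java
import java.util.ArrayDeque;
import java.util.ArrayList;
import java.util.Collection;
import java.util.Deque;
import java.util.List;

// A toy mark-sweep collector over a hypothetical object graph.
final class ToyHeap {
    static final class Obj {
        final String name;
        final List<Obj> refs = new ArrayList<>();
        boolean marked;
        Obj(String name) { this.name = name; }
    }

    final List<Obj> allObjects = new ArrayList<>();

    Obj allocate(String name) {
        Obj o = new Obj(name);
        allObjects.add(o);
        return o;
    }

    // Mark phase: follow references transitively from each root.
    void mark(Collection<Obj> roots) {
        Deque<Obj> stack = new ArrayDeque<>(roots);
        while (!stack.isEmpty()) {
            Obj o = stack.pop();
            if (!o.marked) {
                o.marked = true;
                stack.addAll(o.refs);
            }
        }
    }

    // Sweep phase: everything left unmarked is unreachable, hence garbage.
    void sweep() {
        allObjects.removeIf(o -> !o.marked);
        allObjects.forEach(o -> o.marked = false); // reset for the next cycle
    }
}

public class MarkSweepDemo {
    public static void main(String[] args) {
        ToyHeap heap = new ToyHeap();
        ToyHeap.Obj root = heap.allocate("root");
        ToyHeap.Obj live = heap.allocate("live");
        heap.allocate("garbage");          // unreachable from the root
        root.refs.add(live);

        heap.mark(List.of(root));
        heap.sweep();
        System.out.println(heap.allObjects.size()); // prints 2
    }
}
```

Note that both phases touch every live object, which is why the cost grows with the heap size.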

Putting the generational approach aside, there is another technique known as reference counting. It is quite straightforward. One possible implementation adds an extra counter to every object on the heap. Say we have an object A allocated on the heap. The counter tracks how many other objects hold a reference to A. As soon as the count drops to zero, i.e. no other object on the heap references A, the memory occupied by A can be freed. One might think that in the case of Erlang reference counting should be preferable and sufficient, as immutable data means there should not, in theory, be any cyclic references. However, in most cases that extra counter makes the whole process too expensive: it increases the size of every individual object on the heap, and it has to be updated on every assignment.
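The bookkeeping can be sketched with a hypothetical retain/release wrapper. The names `RefCounted`, `retain`, and `release` are my own for this sketch, not any real library's API:

```java
import java.util.function.Consumer;

// Toy manual reference counting (illustrative only).
final class RefCounted<T> {
    private final T value;
    private final Consumer<T> destructor;
    private int count = 1;            // the creator holds the first reference

    RefCounted(T value, Consumer<T> destructor) {
        this.value = value;
        this.destructor = destructor;
    }

    T get() { return value; }

    void retain() { count++; }        // a new holder appears

    void release() {                  // a holder goes away
        if (--count == 0) {
            destructor.accept(value); // last reference gone: free the object
        }
    }

    int count() { return count; }
}
```

Every `retain`/`release` pair is extra work on top of the program's own logic, which is exactly the per-object overhead the paragraph above refers to.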

Shared Heap Architecture (JVM)

This kind of architecture is the most commonly used today. You can find it in Java or C#. Its name already tells us that the heap is shared by every thread in a program. The stack, on the other hand, is private to each thread. In theory, every shared object in this model should be synchronized using locks; however, a language can still provide thread-local storage. The heap can grow pretty large in such a model, and unless we use a concurrent garbage collector, GC processing tends to be time-consuming, which may result in a stop-the-world application pause. Locking is not an optimal synchronization technique if you want to scale to a large number of processors, but it is necessary in this type of architecture.
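The hazard can be sketched with two hypothetical threads racing over one list on the shared heap. The list is wrapped with Collections.synchronizedList only so that individual operations do not corrupt it; the final state is still unpredictable:

```java
import java.util.ArrayList;
import java.util.Collections;
import java.util.List;

// Two threads mutating one object that lives on the shared heap.
public class SharedListRace {
    public static void main(String[] args) throws InterruptedException {
        // synchronizedList keeps each single operation safe, but the
        // interleaving of add() and clear() is still nondeterministic.
        final List<Integer> list = Collections.synchronizedList(new ArrayList<>());

        Thread adder = new Thread(() -> {
            for (int i = 0; i < 1_000; i++) list.add(i);
        });
        Thread clearer = new Thread(() -> {
            for (int i = 0; i < 1_000; i++) list.clear();
        });

        adder.start();
        clearer.start();
        adder.join();
        clearer.join();

        // Anywhere from 0 to 1000 elements may remain, depending on timing.
        System.out.println("size after both threads: " + list.size());
    }
}
```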

Consider two threads sharing a single List: one keeps adding elements while the other keeps clearing it. There is no way to tell whether the list is empty or not after both threads have terminated, as they both use the same object. What is interesting is that even if you declare the List reference as final, the list can still be mutated: final only fixes the reference, while the object still provides methods that change its internal state.

Private Heap Architecture (BEAM)

Architecture of this kind is used in Erlang/OTP. Each Erlang process (not an operating-system process, but a userspace process even lighter than a Java Thread) has its own heap that is garbage collected independently. There is also a global, shared memory space which is used, for example, to store large binaries; that space is maintained using reference counting. With such low-cost processes it is common to run hundreds of thousands of them on a single machine, in which case individual heaps tend to be small enough to drastically reduce GC time. Besides the ease of scaling, one of the benefits of this memory model is that after a process terminates, its entire heap can be reclaimed at once. The downside is that, with disjoint heaps, each message has to be copied from the sender's to the receiver's process heap[i]. Message passing consists of the following three steps:

1. Calculating the size of the message.
2. Copying the message to the receiver's heap.
3. Delivering the message to the receiver's message queue.

The complexity depends on the size of the message, hence sending is O(n) in the message size and can therefore be costly for large messages.
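The three steps above can be sketched with a toy mailbox that copies on send. `ToyProcess` and its methods are invented for illustration; the real BEAM implements this in C on raw terms:

```java
import java.util.ArrayDeque;
import java.util.Arrays;
import java.util.Queue;

// A toy mailbox mimicking copy-on-send semantics: the receiver gets its
// own copy of the message, so the two "heaps" stay disjoint.
final class ToyProcess {
    private final Queue<int[]> mailbox = new ArrayDeque<>();

    void send(int[] message) {
        // Step 1: the size is known (message.length).
        // Step 2: copy the payload onto the receiver's side, O(n).
        int[] copy = Arrays.copyOf(message, message.length);
        // Step 3: deliver the copy to the receiver's message queue.
        mailbox.add(copy);
    }

    int[] receive() { return mailbox.poll(); }
}
```

Because the receiver only ever sees its own copy, the sender may mutate or discard its data afterwards without affecting anything already delivered; the price is the O(n) copy on every send.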

When to use which model?

In the era of the cloud, microservices, distributed computing, and caching, I would say a private heap architecture is generally more desirable, though not nearly as common as its alternative. However, each case is different, and the choice remains at the discretion of software architects and development teams. They need to be aware of the pros and cons of each tool that could solve the business problem they face, and it is also their responsibility to choose the best technology for their particular circumstances. Let me mention that there is still a place for monolithic applications in the modern world, and for such a program I would consider a platform with a shared heap architecture. There are no silver bullets here. If neither of the presented architectures is good enough in your case, there is one more that I am aware of – a hybrid architecture, which is an attempt to create a model even better suited for concurrent processing than the private heap architecture. It is not discussed here, as it would make the post even longer than it already is, but you can find it described in one of my sources listed at the end of the post.

Summary

I have barely managed to scratch the surface here: both the JVM and BEAM memory models involve far more than heaps and stacks. Memory management is a complicated problem, but I still think a professional software engineer should be familiar with at least its main concepts and terminology. It is not merely an academic discussion, as each of us engineers, every now and then, faces the problems described in this post, and each of us has to live with the consequences of the technology we choose. Moreover, our companies will have to struggle with all the problems that might occur if we choose wrong. Responsibility is a key factor in our industry, and being responsible in this context means having the knowledge necessary to solve a problem and using it to the best of our ability. I have read the white paper[iii], which focuses solely on the problem discussed in this post, and I strongly encourage you to do the same, as it will help you get a grasp of the subject.