The Swiss army knife of reactive systems on the JVM


Akka is one of the most mature technologies available on the JVM for building reactive applications. It builds on top of the actor-based concurrency model and is inspired by Erlang (in fact, the initial name of the project was Scala OTP). The influence of the Akka project on the reactive ecosystem is rather significant: Jonas Bonér (Akka’s creator) and Roland Kuhn (Akka’s former project lead at Lightbend) are the co-authors of the Reactive Manifesto. The Reactive Streams initiative has been significantly supported by the Akka team.

Actors as the unit of computation

Akka applications are composed of actors that are arranged in a hierarchy and send messages to each other. An actor’s behavior describes what happens when it receives a message: it can alter its state, create a new child actor, forward the message to another actor, reply to the sender, or ignore the message altogether. It is through this set of interactions between actors that advanced concurrent applications can be written without the headaches of thread-based concurrency — unwanted race conditions, data races, deadlocks and livelocks — which are hard to reason about even for seasoned developers. Akka takes these pains away by providing a higher-level abstraction, but not one so high that it hides the fact that the application is concurrent; hiding concurrency entirely is a trend seen in many application servers and frameworks, and it has turned out to be hurtful rather than helpful.
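Stripped of Akka’s machinery, the core idea is simple: an actor is private state plus a handler that processes one message at a time. The sketch below is a toy model with made-up names (`ToyActor`, `tell`, `processAll`), not Akka’s API; it only illustrates the behaviors listed above — altering state, replying, and ignoring unknown messages:

```scala
// Toy model of an actor: private state plus a one-message-at-a-time handler.
// NOT Akka's API; names here are illustrative only.
import scala.collection.mutable

class ToyActor(val name: String) {
  private val mailbox = mutable.Queue[Any]() // messages waiting to be processed
  private var count = 0                      // private state, touched only by the handler

  def tell(msg: Any): Unit = mailbox.enqueue(msg)

  // Drain the mailbox sequentially: no locks needed, because only this
  // method ever reads or writes the actor's state.
  def processAll(): List[String] = {
    val replies = mutable.ListBuffer[String]()
    while (mailbox.nonEmpty) {
      mailbox.dequeue() match {
        case "increment" => count += 1                         // alter state
        case "report"    => replies += s"$name counted $count" // reply to sender
        case _           => ()                                 // ignore unknown messages
      }
    }
    replies.toList
  }
}

val a = new ToyActor("counter")
a.tell("increment"); a.tell("increment"); a.tell("report"); a.tell("garbage")
val replies = a.processAll()
// replies == List("counter counted 2")
```

In real Akka, a dispatcher guarantees that each actor processes at most one message at a time, which is what makes this single-threaded illusion safe under concurrency.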

The actor hierarchy is the secret ingredient that makes Akka applications resilient to failure: each parent actor supervises its children and is responsible for what happens when one of them crashes. The supervisor decides the failing child’s fate by specifying a supervisor strategy. Depending on the type of exception, the failing actor (and, if necessary, all of its siblings) is either resumed, restarted or stopped — and in some cases, the failure is escalated higher up the hierarchy until an actor knows how to handle it. This separation of failure-handling logic from business logic is a key concept in Akka’s design: failure is embraced and treated as a first-class citizen rather than an afterthought.
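The heart of a supervisor strategy is a decision function that maps a child’s failure to a directive. The directive names below mirror Akka’s actual ones (Resume, Restart, Stop, Escalate); everything else — the `decide` function and the exception-to-directive mapping — is an illustrative sketch, not Akka’s `SupervisorStrategy` API:

```scala
// Sketch of a supervision decision: map the child's failure to a directive.
// Akka expresses this as a Decider inside e.g. OneForOneStrategy.
sealed trait Directive
case object Resume   extends Directive // keep the child's state, continue
case object Restart  extends Directive // discard state, re-create the child
case object Stop     extends Directive // terminate the child permanently
case object Escalate extends Directive // let the grandparent decide

def decide(failure: Throwable): Directive = failure match {
  case _: ArithmeticException   => Resume   // transient glitch, state still valid
  case _: IllegalStateException => Restart  // state is corrupt, start fresh
  case _: InterruptedException  => Stop     // no point in carrying on
  case _                        => Escalate // unknown: pass it up the hierarchy
}

val d1 = decide(new IllegalStateException("corrupt"))
val d2 = decide(new RuntimeException("boom"))
// d1 == Restart, d2 == Escalate
```

The business logic in the child never sees this decision; it simply throws, and the parent’s strategy takes over — which is exactly the separation described above.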

Designed from the ground up for distributed systems

Akka actors are designed in such a way that the physical location of an actor should not influence how it is interacted with. This concept, known as location transparency, is of paramount importance for building reactive applications, which need to be distributed in order to be resilient through redundancy and capable of scaling out when the load demands it. Paradoxically, distribution itself is then a necessary evil which lies at the root of many failures, since networks are shaky constructs that fail all the time.
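Location transparency can be sketched as a shared interface that hides whether a message crosses a process boundary. The names here (`Ref`, `LocalRef`, `RemoteRef`) are hypothetical and the “network hop” is simulated by a serialization round-trip; Akka’s real `ActorRef` machinery is far more involved, but the caller-side picture is the same — identical code regardless of where the target lives:

```scala
// Location transparency, sketched: callers hold a reference and call tell();
// whether the target is in-process or behind a (simulated) network hop is an
// implementation detail. Hypothetical names, not Akka's ActorRef machinery.
import scala.collection.mutable

trait Ref { def tell(msg: String): Unit }

class LocalRef(log: mutable.Buffer[String]) extends Ref {
  def tell(msg: String): Unit = log += msg // direct in-process enqueue
}

class RemoteRef(log: mutable.Buffer[String]) extends Ref {
  def tell(msg: String): Unit = {
    val bytes = msg.getBytes("UTF-8")        // "serialize" for the wire
    log += new String(bytes, "UTF-8")        // "deliver" on the remote side
  }
}

def greet(ref: Ref): Unit = ref.tell("hello") // caller code is identical

val local  = mutable.Buffer[String]()
val remote = mutable.Buffer[String]()
greet(new LocalRef(local))
greet(new RemoteRef(remote))
// both buffers now contain "hello"
```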

Akka promotes a few simple principles for embracing network-induced failure: distributed systems are prone to failure by their very nature, and Akka does not try to hide this. In fact, the Akka team’s mantra is “No Magic”: it always makes explicit to its users which guarantees Akka can give and what they need to be aware of. First of all, messages should be sent in asynchronous fire-and-forget mode, in which the sender does not explicitly wait for an answer but reacts to one later, using correlation identifiers when necessary. Second, first-class failure handling is provided through the previously described actor supervision. Third, Akka is explicit as to what guarantees it can give in terms of message delivery. It provides a best-effort mechanism for collecting locally undelivered messages, the so-called “dead letters”, which makes it possible to inspect why certain messages never reached their recipient, at least locally. However, this mechanism does not work across network boundaries, where the use of acknowledgements is required to guarantee at-least-once delivery semantics.
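The first and third principles can be sketched together: fire-and-forget messages carry a correlation identifier so a later reply can be matched to its request, and messages addressed to a non-existent recipient land in a local “dead letters” bucket. This is a toy model under assumed names (`Msg`, `send`, `inboxes`), not Akka’s dead-letter mechanism:

```scala
// Fire-and-forget with correlation ids, plus a local "dead letters" bucket
// for messages whose recipient does not exist. A toy sketch, not Akka's API.
import scala.collection.mutable

case class Msg(correlationId: Long, to: String, payload: String)

val inboxes     = mutable.Map("billing" -> mutable.Buffer[Msg]())
val deadLetters = mutable.Buffer[Msg]()

def send(msg: Msg): Unit =
  inboxes.get(msg.to) match {
    case Some(inbox) => inbox += msg        // delivered locally
    case None        => deadLetters += msg  // no such recipient: dead letter
  }

send(Msg(1L, "billing",  "invoice #42"))
send(Msg(2L, "shipping", "parcel #7"))     // "shipping" does not exist

// A reply arriving later can be matched to its request via correlationId == 1L,
// and deadLetters can be inspected to see why "shipping" never got its message.
```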

In order to build distributed applications, Akka offers some very useful extensions.

Akka Persistence, Akka Cluster and Akka HTTP

Akka Persistence allows actors to recover their state after a crash. Persistent actors have a journal that allows them to replay events after a crash; they can also make use of snapshots to speed up the recovery. Journal entries and snapshots can be stored in a variety of backends such as Cassandra, Kafka, Redis, Couchbase and many more.
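The recovery logic reduces to: restore the latest snapshot, then replay only the journal entries recorded after it. The sketch below is toy event-sourcing code under assumed names (`Deposited`, `Snapshot`, `recover`), not Akka Persistence’s actual `PersistentActor` API:

```scala
// Recovery in event sourcing, sketched: restore the latest snapshot, then
// replay only the journal entries recorded after it.
case class Deposited(amount: Int)
case class Snapshot(balance: Int, upToEvent: Int) // state as of event #upToEvent

def recover(snapshot: Option[Snapshot], journal: Vector[Deposited]): Int = {
  val (start, toReplay) = snapshot match {
    case Some(s) => (s.balance, journal.drop(s.upToEvent)) // skip covered history
    case None    => (0, journal)                           // full replay from scratch
  }
  toReplay.foldLeft(start)((balance, ev) => balance + ev.amount)
}

val journal    = Vector(Deposited(10), Deposited(20), Deposited(5))
val fullReplay = recover(None, journal)                  // replays all three events
val fromSnap   = recover(Some(Snapshot(30, 2)), journal) // replays only the last one
// both recover the same state: 35
```

Either path yields the same state; the snapshot only shortens the replay, which is exactly why it speeds up recovery for long-lived actors.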

Akka Cluster lets an actor system run on several nodes and handles basic concerns such as node lifecycle and location-independent message routing. In combination with Akka Persistence, it provides at-least-once delivery semantics for messages sent across the wire. It uses a lightweight gossip protocol to detect failing nodes. Lightbend’s commercial offering also adds the Split Brain Resolver (SBR) extension, which handles network partitions correctly in situations where it is not trivial to decide which nodes should be removed and which ones should survive.
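At-least-once delivery boils down to redelivering until an acknowledgement arrives, which also means the receiver may see duplicates and must deduplicate by message id. The following is a toy simulation of that loop, not Akka’s `AtLeastOnceDelivery` facility; the flaky channel that drops the first two attempts is an assumption for the sake of the example:

```scala
// At-least-once delivery, sketched: keep resending until acknowledged.
// Duplicates are possible, so the receiver deduplicates by message id.
import scala.collection.mutable

val received = mutable.LinkedHashSet[Long]() // dedup by message id
var attempts = 0

// A flaky channel that loses the first two deliveries (simulated).
def unreliableSend(id: Long): Boolean = {
  attempts += 1
  if (attempts <= 2) false          // lost on the network, no ack
  else { received += id; true }     // delivered and acknowledged
}

def sendAtLeastOnce(id: Long): Unit = {
  var acked = false
  while (!acked) acked = unreliableSend(id) // redeliver until confirmed
}

sendAtLeastOnce(42L)
// three attempts were needed, but the message arrived exactly once in `received`
```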

One level higher and an interface to the world: Akka Streams

After several years of building and maintaining actor-based systems with Akka, it became clear that actors can sometimes still be too low-level a concept, especially when it comes to describing advanced scenarios involving control flow and failure handling. Akka Streams is an answer to this insofar as it allows sophisticated flow-manipulation “machines” to be described through a rich set of combinators. Rather than limiting itself to the Akka universe, Akka Streams implements the Reactive Streams API for non-blocking asynchronous stream manipulation on the JVM. Thanks to this, applications built with Akka Streams can seamlessly interoperate with other technologies implementing the Reactive Streams API.
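The combinator style — a source of elements flowing through reusable transformation stages into a sink — can be approximated with plain Scala functions. Real Akka Streams (`Source`, `Flow`, `Sink`) adds asynchrony and back-pressure on top; this sketch, with its own made-up `Flow` alias and `via` helper, only shows the declarative composition:

```scala
// The combinator style of Akka Streams, approximated with plain functions:
// a source of elements, composable transformation stages, a reducing sink.
type Flow[A, B] = Iterator[A] => Iterator[B]

// Compose two stages into one pipeline, like Akka Streams' `via`.
def via[A, B, C](first: Flow[A, B], next: Flow[B, C]): Flow[A, C] =
  first.andThen(next)

val doubler: Flow[Int, Int]   = _.map(_ * 2)
val evensOfFour: Flow[Int, Int] = _.filter(_ % 4 == 0)
val pipeline: Flow[Int, Int]  = via(doubler, evensOfFour)

val source = Iterator.range(1, 6)   // elements 1, 2, 3, 4, 5
val result = pipeline(source).sum   // doubled to 2,4,6,8,10; kept 4 and 8
// result == 12
```

Because the pipeline is just a value, stages can be defined once, tested in isolation, and recombined — the same property that makes Akka Streams graphs reusable.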

A drive for quality and innovation

Akka is in many ways a very innovative and yet robust technology. Now that it has become generally accepted that microservice-oriented architectures are a good idea (better, at least, than the vast, monolithic and unmaintainable enterprise systems that so many companies are stuck with, and stuck because of), Akka will play an even more important role in the reactive ecosystem at large; it already serves as a foundation for other reactive technologies such as the Play Framework and Lagom.

In my opinion, it is this dedication to “doing the right thing” that makes Akka such an exciting project and a great technology to work with. The Akka team is not afraid to experiment with APIs and implementations (Akka Streams, for example, saw as many as six complete rewrites over the course of three years before it was deemed good enough). This is also why, when working with Akka, you should always be mindful of extensions tagged as experimental in the documentation: there is a real chance that their APIs will change significantly over time, which is not necessarily a bad thing in itself but something to be aware of nonetheless.

Last but not least, Akka has a very active community and excellent documentation — so good, in fact, that it is rather difficult to do better when writing a book about it. I can only recommend downloading the PDF and reading the documentation as a whole when getting started with the project, to get a sense of which pieces the toolkit already provides and which concepts to be aware of. Happy hAkking!

To read more about reactive programming, download the latest issue of JAX Magazine:

Reactive programming means different things to different people and we are not trying to reinvent the wheel or define this concept. Instead, we are allowing our authors to prove how Scala, Lagom, Spark, Akka and Play coexist and work together to create a reactive universe.

If the definition “stream of events” does not satisfy your thirst for knowledge, get ready to find out what reactive programming means to our experts in Scala, Lagom, Spark, Akka and Play. Plus, we talked to Scala creator Martin Odersky about the impending Scala 2.12, the current state of this programming language and the technical innovations that await us.


Manuel Bernhardt is a passionate engineer, author, speaker and consultant who has a keen interest in the science of building and operating networked applications that run smoothly despite their distributed nature. Since 2008, he has guided and trained enterprise teams on the transformation to distributed computing. In recent years he has focused primarily on production systems that embrace the reactive application architecture, using Scala, the Play Framework and Akka to this end.
Manuel likes to travel and is a frequent speaker at international conferences. He lives in Vienna, where he is a co-organizer of the Scala Vienna User Group. When not thinking about, talking about or fiddling with computers, he likes to spend time with his family, run, scuba-dive and read. You can find out more about Manuel's recent work at http://manuel.bernhardt.io.