In this talk Patrick will describe how to scale out easily using Akka Cluster. You will learn what Akka Cluster offers, the use cases it suits, and the practical issues his team encountered in implementing Omnia, William Hill's distributed and reactive data platform built on Akka Cluster. Patrick will focus on exploring the different clustering strategies, and illustrate with code examples from Omnia how to create a custom clustering strategy.

Akka Cluster is a framework for coordination in a distributed system. It uses a peer-to-peer approach, where there are no master nodes and machines can join and leave the cluster at any time. This is a very flexible system but does require careful thought about how work is distributed around the cluster---the clustering strategy---to meet the availability and consistency goals of the system.
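As a sketch of what this flexible membership looks like in practice, a minimal Akka configuration (assuming Akka 2.6+ with Artery remoting; the system name, hosts, and ports below are illustrative, not Omnia's actual settings) is enough for a node to contact seed nodes and join the cluster, after which any node can join or leave at runtime:

```
akka {
  actor.provider = "cluster"            # enable cluster membership
  remote.artery.canonical {
    hostname = "127.0.0.1"              # this node's own address
    port = 2551
  }
  cluster.seed-nodes = [                # initial contact points only;
    "akka://Omnia@127.0.0.1:2551",      # nodes may join and leave freely
    "akka://Omnia@127.0.0.2:2551"       # once the cluster is formed
  ]
}
```

Seed nodes are not masters: they are only well-known addresses used for the initial join, consistent with the peer-to-peer design described above.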

On a busy Saturday, William Hill has to cope with a peak of 5 million price updates per minute while tracking 300,000 active user sessions. The team needed to modernise their data pipeline, and Akka Cluster provided an ideal platform to build on: its decentralised peer-to-peer design avoids a single point of failure and allows easy scale-out in response to load.

However, you have to think carefully about how you use Akka Cluster, and above all about which clustering strategy to adopt. The strategy determines how data and load are distributed around the system, and hence determines its ability to function under node failures and stress conditions. Akka Cluster provides an extensible model, which allowed the team to define their own strategy to fulfil their requirements. Patrick will describe the use of Akka Cluster in Omnia, and dive deep into the clustering strategy and other issues the William Hill team encountered in use.
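To make the idea of a custom distribution strategy concrete, here is a hedged sketch in plain Scala (no Akka dependency) of one common approach: a consistent-hash ring that maps keys, such as price or session identifiers, onto nodes. The class and names are illustrative assumptions, not Omnia's actual code.

```scala
import java.security.MessageDigest
import scala.collection.immutable.TreeMap

// A minimal consistent-hash ring. Each node is placed at many positions
// ("virtual nodes") on a ring of hash values; a key is routed to the first
// node clockwise from the key's own hash position.
final class HashRing(nodes: Seq[String], replicas: Int = 100) {

  // Hash a string to a Long using the first 8 bytes of its MD5 digest.
  private def hash(key: String): Long = {
    val digest = MessageDigest.getInstance("MD5").digest(key.getBytes("UTF-8"))
    digest.take(8).foldLeft(0L)((acc, b) => (acc << 8) | (b & 0xffL))
  }

  // Sorted map from ring position to owning node.
  private val ring: TreeMap[Long, String] =
    TreeMap(
      (for {
        node <- nodes
        i    <- 0 until replicas
      } yield hash(s"$node#$i") -> node): _*
    )

  /** Route a key to the first node at or after its hash, wrapping around. */
  def nodeFor(key: String): String =
    ring.rangeFrom(hash(key)).headOption.getOrElse(ring.head)._2
}
```

The appeal of this style of strategy is that when a node joins or leaves, only the keys adjacent to its ring positions move, so roughly 1/N of the data is reshuffled rather than all of it, which matters for availability during failures.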


Go distributed (and scale out) with Actors and Akka Clustering

Patrick Di Loreto, R&D Engineering Lead at William Hill, is driving the development of the company's next generation Data Platform. Passionate about Functional Programming and Machine Learning, Patrick is an experienced engineer focused on designing and implementing distributed systems for highly available and scalable platforms.
