Monthly Archives: April 2016

An ExecutionContext is similar to an Executor: it is free to execute computations in a new thread, in a pooled thread, or in the current thread (although executing the computation in the current thread is discouraged).

The Global Execution Context:

ExecutionContext.global is an ExecutionContext backed by a ForkJoinPool. It should be sufficient for most situations but requires some care.
A ForkJoinPool manages a limited number of threads (the maximum number of threads is referred to as the parallelism level). The number of concurrently blocking computations can exceed the parallelism level only if each blocking call is wrapped in a blocking construct (scala.concurrent.blocking). Otherwise, there is a risk that the thread pool in the global execution context is starved and no computation can proceed.
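A minimal sketch of the blocking wrapper (the sleep stands in for a real blocking operation such as JDBC or file I/O):

```scala
import scala.concurrent.{Future, blocking}
import scala.concurrent.ExecutionContext.Implicits.global

// Wrapping the call in `blocking` tells the ForkJoinPool behind
// ExecutionContext.global that this thread is about to block, so the pool
// may spawn a replacement thread instead of starving other computations.
val f: Future[String] = Future {
  blocking {
    Thread.sleep(100) // stands in for a real blocking call
    "done"
  }
}
```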

By default, ExecutionContext.global sets the parallelism level of its underlying fork-join pool to the number of available processors (Runtime.availableProcessors). This configuration can be overridden by setting the following system properties: scala.concurrent.context.minThreads, scala.concurrent.context.numThreads, scala.concurrent.context.maxThreads.

Thread Pool:

If each incoming request fans out into a multitude of requests to another tier of systems, the thread pools in these systems must be sized so that they stay balanced according to the ratio of requests at each tier: mismanagement of one thread pool bleeds into another.

Futures hold the promise for the result of a computation that is not yet complete. They are a simple container - a placeholder. A computation could of course fail, and this must also be encoded. A Future can be in exactly one of three states:

pending

failed

completed

With flatMap we can define a Future that is the result of two futures sequenced, the second future computed based on the result of the first one.
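A small sketch of this sequencing (the lookup values are hypothetical):

```scala
import scala.concurrent.{Await, Future}
import scala.concurrent.ExecutionContext.Implicits.global
import scala.concurrent.duration._

// Two sequenced futures: the second computation uses the result of the first.
val userId: Future[Int] = Future { 42 } // hypothetical lookup
val profile: Future[String] = userId.flatMap { id =>
  Future { s"profile-$id" } // computed from the first future's result
}

// Await here is for demonstration only; production code would attach
// a callback (foreach/onComplete) instead of blocking.
val result = Await.result(profile, 5.seconds) // "profile-42"
```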

Future defines many useful methods:

Use Future.value() and Future.exception() to create pre-satisfied futures

Future.collect(), Future.join() and Future.select() provide combinators that turn many futures into one (i.e. the gather part of a scatter-gather operation)
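The methods above belong to Twitter's com.twitter.util.Future API. The standard library offers analogues; a sketch of the gather step using Future.sequence (the standard-library counterpart of Future.collect), with hypothetical shard values:

```scala
import scala.concurrent.{Await, Future}
import scala.concurrent.ExecutionContext.Implicits.global
import scala.concurrent.duration._

// Scatter: fire one future per shard (the shards here are placeholders).
val shards: List[Future[Int]] = List(1, 2, 3).map(n => Future(n * 10))

// Gather: Future.sequence turns List[Future[Int]] into Future[List[Int]],
// completing when all inputs complete (or failing fast on the first failure).
val gathered: Future[List[Int]] = Future.sequence(shards)

val all = Await.result(gathered, 5.seconds) // List(10, 20, 30)
```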

By default, futures and promises are non-blocking, making use of callbacks instead of typical blocking operations. Scala provides combinators such as flatMap, foreach and filter to compose futures in a non-blocking way.

Two important concepts to understand when configuring dispatchers:

Throughput
It defines the number of messages that are processed in a batch before the thread is returned to the pool.

Parallelism factor
The parallelism factor is used to determine thread pool size using the following formula: ceil (available processors * factor). Resulting size is then bounded by the parallelism-min and parallelism-max values.
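In Akka, both settings live in the dispatcher configuration; an illustrative HOCON fragment (the dispatcher name and values are placeholders, not recommendations):

```hocon
my-dispatcher {
  type = Dispatcher
  executor = "fork-join-executor"
  fork-join-executor {
    # pool size = ceil(available processors * factor),
    # then clamped to [parallelism-min, parallelism-max]
    parallelism-min    = 2
    parallelism-factor = 3.0
    parallelism-max    = 64
  }
  # messages processed per actor before the thread is returned to the pool
  throughput = 5
}
```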

Too many threads:
Find the minimal concurrency level, or split the service into several JVMs.

Eden should be big enough to hold more than one transaction's worth of allocations. Then minor collections rarely interrupt a transaction with a stop-the-world pause, and throughput is higher.

Each survivor space should be big enough to hold the live objects and the objects still aging toward promotion.

Lowering the tenuring threshold (-XX:MaxTenuringThreshold) promotes long-lived objects to the old generation sooner, releasing space in the survivor spaces.
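These sizes are controlled by standard HotSpot flags; an illustrative combination (the values are placeholders, not recommendations):

```
-Xms4g -Xmx4g               # fixed overall heap size
-Xmn1g                      # young generation (Eden + two survivor spaces)
-XX:SurvivorRatio=8         # Eden is 8x the size of each survivor space
-XX:MaxTenuringThreshold=4  # promote to old gen after surviving 4 minor GCs
```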

On a 64-bit JVM, 64-bit pointers mean fewer object references fit in the CPU caches than 32-bit pointers would allow. Enabling -XX:+UseCompressedOops compresses 64-bit references down to 32 bits (for heaps up to roughly 32 GB), although the process still runs in a 64-bit address space.
An object stored in memory is split into 3 parts: the object header, the instance data, and alignment padding.

The default implementation of Scheduler used by Akka is based on job buckets which are emptied according to a fixed schedule. The scheduler method returns an instance of akka.actor.Scheduler which is unique per ActorSystem and is used internally for scheduling things to happen at specific points in time.

Scheduler implementations are loaded reflectively at ActorSystem start-up with the following constructor arguments:

the system’s com.typesafe.config.Config (from system.settings.config)

an akka.event.LoggingAdapter

a java.util.concurrent.ThreadFactory

You can schedule sending of messages to actors and execution of tasks (functions or Runnable). You get back a Cancellable on which you can call cancel to cancel the scheduled operation. cancel returns true if the cancellation was successful. If the Cancellable was already (concurrently) cancelled, cancel returns false, although isCancelled will still return true.
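A minimal sketch with classic Akka (the system name and delay are arbitrary):

```scala
import akka.actor.ActorSystem
import scala.concurrent.duration._

val system = ActorSystem("demo")
import system.dispatcher // ExecutionContext for the scheduled task

// Schedule a task 5 seconds from now; scheduleOnce returns a Cancellable.
val cancellable = system.scheduler.scheduleOnce(5.seconds) {
  println("tick")
}

// The first cancel succeeds; a second (or concurrent) cancel returns false,
// while isCancelled keeps returning true.
val first  = cancellable.cancel()
val second = cancellable.cancel()

system.terminate()
```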

Cancellation flow:

a cancellation signal set by a consumer is propagated to its producer.

the producer uses onCancellation on Promise to listen to this signal and act accordingly.
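scala.concurrent.Promise has no built-in cancellation hook (onCancellation belongs to Twitter's Promise), so the flow can be modelled explicitly: the consumer completes a cancel signal, and the producer has registered a callback on it. A sketch under that assumption:

```scala
import scala.concurrent.{Future, Promise}
import scala.concurrent.ExecutionContext.Implicits.global
import java.util.concurrent.CancellationException

final class CancellableWork {
  private val cancelSignal = Promise[Unit]()
  private val result       = Promise[String]()

  // Producer side: listen for the cancellation signal and act accordingly
  // (this callback stands in for onCancellation on Twitter's Promise).
  cancelSignal.future.foreach { _ =>
    result.tryFailure(new CancellationException("cancelled by consumer"))
  }

  // Consumer side: set the cancellation signal, propagated to the producer.
  def cancel(): Unit = cancelSignal.trySuccess(())

  def future: Future[String] = result.future
}
```

Calling cancel() on a CancellableWork fails its future with a CancellationException unless the producer completed it first.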