11 June 2014

Don't miss: ddd-leaven-akka-v2 - runnable sample e-commerce application built from Akka-DDD artifacts described in this series.

Let's continue our adventure with Akka and DDD. So far we have concentrated on the write/command side of the system, covering topics such as command processing, event storage and sharding, but we haven't touched the other side of the system: the query side. The goal of this lesson is to implement the infrastructure necessary to support creation of the components responsible for feeding the query-side database: projections. For the query side we will use an SQL database, although other technologies are worth considering (a document or graph database might be a better choice, depending on requirements).

Our first task is to check whether Akka (akka-persistence in its current, still experimental version, to be precise) supports the concept of projections. As it turns out, Akka provides the View trait, which defines an actor capable of receiving events from the journal of a particular processor (aggregate root, AR) instance. This sounds good, but unfortunately the concept of a View is insufficient for building projections, which typically listen to an aggregated stream of events rather than the events of a particular AR instance. A workaround would be to register a view for a processor that receives events from all ARs of a given type, but in such a scenario events would be persisted twice, and reliable event delivery (more on that later) would have to be ensured between the ARs and the aggregating processor.

What we should require from an "ideal" event store implementation is a DSL for event stream transformation and aggregation.

But what if we don't want to depend on any particular journal provider? Well, composing streams of events would be possible if journals were exposed through some functional (composable) interface rather than through Views. And this is one of the goals already established by the Akka team with the introduction of the akka-stream module. Akka-stream is an implementation of the recently announced standard for asynchronous, composable and non-blocking stream processing on the JVM (http://www.reactive-streams.org/) that models stream publishers and subscribers (as defined by the specification) as Akka actors. It already provides a simple DSL (PersistentFlow) for creating event stream publishers backed by Views. As you can see here, event streams can be aggregated by merging stream publishers (although aggregation by category is not yet supported). Once akka-stream is released we should definitely make use of it and see how to implement projections as reactive components (subscribers of event streams). For now we need to find a different solution. The only possibility left is to introduce a third-party broker and forward events to topics or queues configured within the broker. As we will see in the following sections, integrating Akka with an external broker is much simpler than one could imagine. This approach will require enriching the AR with a Publisher component (trait) responsible for sending persisted events to a configured target actor. This is a perfect opportunity to learn more about message delivery in distributed systems.
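To make the idea of "aggregation by merging stream publishers" concrete, here is a deliberately simplified, library-free Scala sketch. It merges already-materialized per-instance event sequences into one ordered sequence; a real akka-stream implementation would of course merge asynchronously as events arrive, and the names (Event, mergeStreams) are illustrative, not part of any Akka API.

```scala
// A minimal event record; not an Akka type, just an illustration.
case class Event(aggregateId: String, sequenceNr: Long, payload: String)

object StreamMerge {
  // Merge several per-instance event streams into one aggregated stream.
  // Here the merged stream is ordered by sequence number (ties broken by
  // aggregate id); a reactive implementation would interleave live events.
  def mergeStreams(streams: Seq[Seq[Event]]): Seq[Event] =
    streams.flatten.sortBy(e => (e.sequenceNr, e.aggregateId))
}
```

The point of the sketch is only that a composable (functional) journal interface makes this kind of aggregation a one-liner, which is exactly what the View trait does not offer.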

Reliable event delivery

Sending events generated by an AR to any other actor (be it a broker gateway or the aggregating processor mentioned before) should be straightforward to implement, right? Yes, it is, if you accept that in case of a node crash some messages might be lost (despite the fact that the corresponding commands had been acknowledged to the client!). Akka does not guarantee that a message will be delivered (so-called at-most-once delivery semantics), because reliable delivery does not come for free in distributed systems. To obtain at-least-once delivery semantics (the message is guaranteed to be delivered, but duplicates may occur), an acknowledgment protocol must be used. The sender must store the message before sending it, and the receiver must acknowledge the message once it is received. When the acknowledgment arrives, the sender marks the previously stored message as confirmed. If the acknowledgment is not received within a configured time period, or the sender is restarted (for example as the result of a crash), the sender retries delivery. Because the acknowledgment message might be lost as well, the sender might resend a message that was already delivered. Akka-persistence provides at-least-once delivery semantics in its core. The journal plugin API defines methods for storing confirmation entries for messages that have been acknowledged. The acknowledgment and retry mechanism is encapsulated within the Channel component, which must be used to send events from within a processor (AR) to a destination actor. Events must be wrapped in the Persistent envelope so that the receiver can acknowledge a message by simply calling confirm on the received ConfirmablePersistent.
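The sender-side bookkeeping of this protocol can be modeled in a few lines of plain Scala. This is not the Channel implementation, just a sketch of the store-before-send / confirm / redeliver cycle described above; all names are made up for the example.

```scala
import scala.collection.mutable

// Minimal model of the sender side of an at-least-once delivery protocol.
// Messages are stored before sending and removed only on confirmation, so
// unconfirmed messages can be redelivered after a timeout or a restart.
class AtLeastOnceSender {
  private val unconfirmed = mutable.LinkedHashMap[Long, String]()
  private var nextId = 0L

  // Store the message first, then hand it to the transport.
  def send(msg: String)(transport: (Long, String) => Unit): Long = {
    nextId += 1
    unconfirmed(nextId) = msg
    transport(nextId, msg)
    nextId
  }

  // The receiver's acknowledgment marks the message as confirmed.
  def confirm(id: Long): Unit = unconfirmed.remove(id)

  // After a restart or timeout everything unconfirmed is sent again;
  // this is why the receiver must tolerate duplicates.
  def redeliver(transport: (Long, String) => Unit): Unit =
    unconfirmed.foreach { case (id, msg) => transport(id, msg) }
}
```

Note that a message confirmed after a redelivery has already gone out is exactly the duplicate case mentioned above: the protocol trades at-most-once for at-least-once, never exactly-once.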

ReliablePublisher

Thanks to trait composability we can put all the behavior required by a reliable AR (the sender) into a separate component: ReliablePublisher. It extends the abstract EventPublisher, which in turn extends the abstract EventHandler. We mix AggregateRoot in with EventHandler, making event handling in the context of an AR as configurable as possible (useful for testing purposes). Please notice that it is possible to stop redelivery when a configured number of retries is exceeded. For that purpose a RedeliverFailureListener has been registered on the channel to be notified if redelivery fails. The listener actor throws a RedeliveryFailedException, which results in a restart of the parent actor (the AR) (to make this work, the supervisorStrategy of the listener actor had to be adjusted). Inside the preRestart method (which ReliablePublisher overrides) we can trigger a compensation action and/or mark the failing event as deleted (to prevent redelivery from continuing after the AR is restarted).
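The layering described here can be illustrated with a stackable-trait sketch. The bodies below are simplified stand-ins (no actors, no channels) meant only to show how EventHandler, EventPublisher and the reliable layer compose; none of this is the actual akka-ddd code.

```scala
// Base layer: how an event is handled (overridable for testing).
trait EventHandler {
  def handle(event: String): Unit = () // default: handle silently
}

// Publishing layer: handling an event also publishes it.
trait EventPublisher extends EventHandler {
  def publish(event: String): Unit
  override def handle(event: String): Unit = {
    super.handle(event)
    publish(event)
  }
}

// "Reliable" layer: counts delivery attempts and sends to a target that
// must be supplied at creation time (cf. AggregateRootActorFactory).
trait ReliablePublisherSketch extends EventPublisher {
  def target: String => Unit
  var deliveryAttempts: Int = 0
  def publish(event: String): Unit = {
    deliveryAttempts += 1
    target(event)
  }
}

// The aggregate mixes the layers together.
class AggregateSketch(val target: String => Unit)
  extends EventHandler with ReliablePublisherSketch
```

The value of this decomposition is that the same aggregate logic can be tested with a trivial in-memory publisher and deployed with the channel-backed reliable one.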

To provide a destination for ReliablePublisher, its abstract member target must be defined at creation time (AggregateRootActorFactory should take care of this, as shown in ProductPublishingSpec). Finally, we should verify that ReliablePublisher works as expected and is really reliable. You can find the test here: ReliablePublisherSpec.

Message exchange over queue

Now we will configure the infrastructure for event transmission over a durable queue. We will use Apache Camel to arrange an in-only message exchange between a component writing to the queue (the producer) and the projection component reading from the queue (the consumer). Thanks to akka-camel, both components (producer and consumer) can be represented as ... (surprise! ;-) actors. EventMessageConfirmableProducer is the producer that receives event messages (notice that events are published inside an EventMessage envelope) coming from the ReliablePublisher (AR) and forwards them to the configured Camel endpoint, for example a JMS queue or topic (see the transformOutgoingMessage method). Once an event message is accepted by the queue (and persisted, if the queue is durable), the producer acknowledges event reception to the publisher (ReliablePublisher) (see the routeResponse method). Please notice that we chose to unwrap the EventMessage from the ConfirmablePersistent envelope before putting it into the queue (so that the consumer does not have to do the unwrapping itself). The ConfirmablePersistent still needs to be attached to the EventMessage, so we convert it to a meta attribute.
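The essential handshake here (confirm back to the publisher only after the queue has accepted the message) can be modeled without Camel at all. A library-free sketch, with all names illustrative rather than akka-camel API:

```scala
import scala.collection.mutable

// Event envelope with a meta map, standing in for the real EventMessage
// (to which the ConfirmablePersistent is attached as a meta attribute).
case class EventMessage(payload: String, meta: Map[String, String] = Map.empty)

// Stand-in for a durable broker queue.
class DurableQueue {
  private val q = mutable.Queue[EventMessage]()
  def put(m: EventMessage): Unit = q.enqueue(m)
  def take(): Option[EventMessage] = if (q.isEmpty) None else Some(q.dequeue())
}

class ProducerModel(queue: DurableQueue) {
  // `confirm` plays the role of the acknowledgment routed back to
  // ReliablePublisher once the endpoint has accepted the message.
  def onEventMessage(m: EventMessage, confirm: () => Unit): Unit = {
    queue.put(m) // transformOutgoingMessage would adapt the message here
    confirm()    // routeResponse: ack only after the queue accepted it
  }
}
```

Ordering matters: if the confirmation were sent before the enqueue succeeded, a crash in between would lose the event despite the publisher considering it delivered.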

Finally, we can implement the projection as a Consumer actor, an actor that consumes messages from a Camel endpoint. The projection actor simply applies the provided projection specification (ProjectionSpec) to the received event and finalizes the message exchange by sending either an acknowledgment or a failure. To prevent processing of duplicate events, a concrete implementation of ProjectionSpec must return the sequence number of the last processed event for a given aggregateId (see the currentVersion method):
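The actual ProjectionSpec lives in the akka-ddd repository; as a simplified, library-free stand-in, the deduplication idea can be sketched in plain Scala (the names DomainEvent and DedupProjection are made up for the example):

```scala
import scala.collection.mutable

// Event carrying the per-aggregate sequence number used for deduplication.
case class DomainEvent(aggregateId: String, sequenceNr: Long, payload: String)

// Applies an event to the read model only if its sequence number is higher
// than the last one already applied for that aggregate: the role played by
// ProjectionSpec.currentVersion in the real implementation.
class DedupProjection(applyToReadModel: DomainEvent => Unit) {
  private val versions = mutable.Map[String, Long]().withDefaultValue(0L)

  def consume(e: DomainEvent): Unit =
    if (e.sequenceNr > versions(e.aggregateId)) {
      applyToReadModel(e)                    // update the read model
      versions(e.aggregateId) = e.sequenceNr // remember last applied version
    } // else: a redelivered duplicate; acknowledge but apply nothing
}
```

This is what makes at-least-once delivery safe end to end: the transport may duplicate, but the projection is idempotent.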

Before pulling all the pieces together we need to register a concrete broker as a Camel component. We will use ActiveMQ. Configuration of the ActiveMQ component is straightforward (see the ActiveMQMessaging trait). And of course we need a runner class (EmbeddedActiveMQRunner) that starts the broker as an embedded service.

Implementing projection

Now we can implement a concrete projection specification. Let's model a ProductCatalog service inside the Sales module (context) that maintains a list of products with prices. Whenever a new Product is added to the inventory (within the Inventory module/context) it should also be added to the product catalog in the Sales module. Thus we need a projection that listens to the InventoryQueue and updates the product catalog. We will skip the implementation details of ProductCatalog, as accessing a relational database is not a very interesting topic (we use Slick for that purpose). InventoryProjection just calls the insert method of ProductCatalog, providing the Product data copied from the ProductAdded event and an empty price.
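A hypothetical, in-memory sketch of this mapping (not the actual Slick-backed code; all types and method names are illustrative):

```scala
import scala.collection.mutable

// Event from the Inventory context.
case class ProductAdded(productId: String, name: String)

// Read-model row in the Sales context; price is filled in later.
case class CatalogEntry(productId: String, name: String, price: Option[BigDecimal])

// In-memory stand-in for the Slick-backed ProductCatalog.
class ProductCatalogModel {
  private val entries = mutable.Map[String, CatalogEntry]()
  def insert(e: CatalogEntry): Unit = entries(e.productId) = e
  def find(id: String): Option[CatalogEntry] = entries.get(id)
}

// The projection copies product data from the event and leaves the price empty.
class InventoryProjectionModel(catalog: ProductCatalogModel) {
  def apply(event: ProductAdded): Unit =
    catalog.insert(CatalogEntry(event.productId, event.name, price = None))
}
```

The empty price is the interesting bit: the Sales context owns pricing, so the projection deliberately copies only the data the Inventory context is authoritative for.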

An integration test is available here: EventsPublishingClusterSpec. From the first node we send two AddProduct commands. The commands are handled by the Product AR within the Inventory context, and the generated events are published (reliably) to the InventoryQueue, which is available from all nodes in the cluster. On the second node we register the InventoryProjection, wait for 1 second (so that the published events have time to reach the queue) and then send a GetProduct query message to ProductFinder to check whether the expected product has been added to the product catalog.

And, surprisingly, the test succeeds ;)!

Last but not least, I added an experimental feature to our application:
If the command sender wants to be notified once the view has been updated, it can request a special delivery receipt!

Good question! You are right, the broker does not help when it comes to view-side recovery. If we want to execute a replay, we need to create a new View or restart an existing View. If all events need to be replayed (in the context of some AggregateRoot type), we need to either create/restart a View for each aggregate instance or create/restart a single View for an aggregating processor. The first option is troublesome, as it requires fetching the identifiers of all registered processors, and this is only possible by using a projection (that can break...) or by querying the journal itself using a query language supported by the journal provider. The second option (the aggregating processor) introduces data duplication on the write side (as mentioned in the article) but is worth considering here... (I think the broker could still be used in this scenario). Assuming we can shut down the write side for the duration of view-side recovery, the recovery operation should be straightforward. Otherwise, switching from replay mode to live mode could be problematic when dealing with a broker... The good news (the only good news :) here) is that whether we use a broker or Views for feeding projections, we can still use the existing ProjectionSpec for defining the projection logic.
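To make the "replay with the same projection logic" point concrete, here is a tiny library-free sketch: given the full, ordered event history (however it is obtained, per-instance Views or an aggregating processor), the read model is rebuilt from scratch by folding the same projection function over it. All names are illustrative.

```scala
// A journal entry; ordering within an aggregate is given by sequenceNr.
case class StoredEvent(aggregateId: String, sequenceNr: Long, payload: String)

object ReplayRecovery {
  // Rebuild a read model of type S by replaying the whole journal through
  // the same projection function used in live mode.
  def rebuild[S](initial: S, journal: Seq[StoredEvent])(project: (S, StoredEvent) => S): S =
    journal.sortBy(e => (e.aggregateId, e.sequenceNr)).foldLeft(initial)(project)
}
```

The hard part discussed above is not the fold itself but obtaining the journal and switching from replay mode back to live mode without missing or double-applying events.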

Thank you for your response to my previous question. I don't think naming is the most important thing, but I suggest Manager, OfficeManager or RegionalManager would be a better name than just Office. This would also be in line with the idea that actors should be thought of as people.

At present I am interested in a denormalised view of a collection of AggregateRoots (e.g. Department has many Staff). Do you think it would be best to use the still-experimental Akka Streams for this (or stick with a message broker)? In particular, I would like the ability to rebuild a view or create new views, but I am unsure if this will be possible with streams.

Thank you, I have checked out the Axon Framework and if I were still in the Java world I would have used it. However, I am trying to move on to Scala, some functional programming, and actors, so I am happy to learn. I am at the start of a project so I don't mind learning and growing with new technologies, especially if they have a good future. I'm also hoping to learn a lot from your code (hopefully it is good; I can't tell :-).

Thanks again for these wonderful examples of doing DDD using CQRS with Akka. Now that Martin has released a Kafka plugin for Akka Persistence would it be possible to use Kafka for the event store and reliable delivery of events rather than Apache Camel and ActiveMQ? I believe it is possible to specify retention periods on a per topic basis so it could be set to "forever" for the event stores. This may simplify the example (or at least make it more efficient).

Ashley, great posts - a great synthesis of DDDD / CQRS and Akka that I will be sharing with others. By the way, I tried compiling https://github.com/pawelkaczor/ddd-leaven-akka-v2 locally but was stuck with an unresolved dependency com.geteventstore#akka-persistence-eventstore_2.11;2.0.2-SNAPSHOT!akka-persistence-eventstore_2.11.jar. Any ideas on how I can resolve? Thanks again.

Thanks Simon! akka-persistence-eventstore_2.11;2.0.2-SNAPSHOT is a fork available here: https://github.com/pawelkaczor/EventStore.Akka.Persistence. You need to check out this project and build it locally. This fork is currently required by akka-ddd (see: https://github.com/pawelkaczor/akka-ddd#eventstore-akka-persistence). Hope this helps.

Ashley, thanks for a quick response. I did download your forked project https://github.com/pawelkaczor/EventStore.Akka.Persistence and did a sbt releaseLocal. Your akka-ddd project now compiles fine. However, ddd-leaven-akka-v2 still fails with a missing dependency .ivy2/local/com.geteventstore/akka-persistence-eventstore_2.11/2.0.2-SNAPSHOT/jars/akka-persistence-eventstore_2.11-sources.jar.

It seems that "sbt releaseLocal" puts sources in a ".ivy2/local/com.geteventstore/akka-persistence-eventstore_2.11/2.0.2-SNAPSHOT/srcs" directory rather than in the ".ivy2/local/com.geteventstore/akka-persistence-eventstore_2.11/2.0.2-SNAPSHOT/jars". I copied sources into jars directory and project builds fine.

There seems to be a difference between akka-ddd and ddd-leaven-akka-v2 sbt settings but I couldn't spot it easily. Thanks again for your great posts.

>> There seems to be a difference between akka-ddd and ddd-leaven-akka-v2 sbt settings but I couldn't spot it easily.

Probably the cause of the problem you experienced is that sbt is somehow confused by the coexistence of my fork version (available locally) and the master branch version (available globally in Maven Central). I have never experienced this problem in my environment, but it has been reported to me in the past.

The symptom is that sbt looks in the jars folder for the sources, i.e.: -> com.geteventstore/akka-persistence-eventstore_2.11/2.0.2-SNAPSHOT/jars/akka-persistence-eventstore_2.11-sources.jar: No such file or directory

Hi Pawel, I couldn't post an issue on your EventStore.Akka.Persistence fork, that's why I am writing here. I just stumbled across issue #8 of the original project and saw that it is closed. Do you plan to switch to this implementation in the near future?

I always like to stay as close as possible to upstream projects, since this is where bug and security fixes are provided first (most of the time).

But I guess we misunderstood each other. I was referring to an issue opened by yourself (https://github.com/EventStore/EventStore.Akka.Persistence/issues/8). I wonder if we could avoid adapting the underlying technologies (EventStore.Akka.Persistence and EventStore.JVM in this case) and re-use a "standard" way. This seems (at least to me) a better way, since there is no more need for "merging" new features and bug fixes.

Thanks for your akka-ddd project and the quick "merge" of the EventStore.JVM stuff.

@triplem, Regarding EventStore.Akka.Persistence: issue #9 (a follow-up of issue #8) is about creating a sample JSON serializer for Akka's PersistentRepr, so there is nothing to be merged. The reasons for forking the original project are described in the Readme: https://github.com/pawelkaczor/EventStore.Akka.Persistence. Regarding EventStore.JVM: my fork contains only 2 changes: 1) support for system properties in reference.conf, 2) the version of Akka changed to 2.4-SNAPSHOT.

A very nice sequence of posts! I'm currently reviewing CQRS/ES solutions in scala and yours seems very good!

I currently have only one question to double-check: is the order of messages to a projection guaranteed? (Because otherwise a projection will ignore later received messages, as it's only checking their incremental version.)

Hi, thanks for the great posts! Do you plan to continue the series using akka-streams? Correct me if I'm wrong, but if I get the idea right, we could disregard the Camel queues if journal aggregation / queries were available, right?

Awesome! Thanks, Paweł. It seems I need to really get to learn Scala, I find it hard to build something similar in Java :)

I saw that you also built a durable scheduler. I've been thinking about the need of building one myself and I have a question about its role in the solution (sorry if it can be inferred from the code):

Assumptions:
- Saga instances remain dormant or passivated until they receive an event, either via their office or their deadline receptor.
- A saga instance is expecting a deadline and no other actor is due to send back an event to it (i.e. an ACK was lost, or the saga just needs to wait for a given moment to continue its work).

Given the chance that the system goes down before the saga message to the scheduling office is processed (hence no scheduled event is persisted), how would that saga recover? Does the saga office go through a journal (most likely a projection) with all the non-finished saga instances to resume them?

Hi Tasio, I highly recommend you learn Scala :) The role of the durable scheduler is (as you mention) to allow a Saga to schedule events. A Saga instance is a persistent actor with an at-least-once delivery guarantee, so there is no chance of losing a scheduled event. The Saga office is a proxy actor provided by Akka Sharding. It knows how to find and activate/resume the target actor (the Saga instance in this case) using a provided Message -> "actorId" function.

I'm hands-on with Scala already! ;) Thanks for your answer. There is one thing that maybe is already solved by the actor being persistent, but I don't actually know how it achieves it... If the saga persistent actor were in the state of "sending request to scheduler", but that request hadn't been sent out yet right when the system crashed, how would that saga actor be resumed?

Good question, Tasio! Sending a request to the scheduler is performed by an event handler, which is also executed during the Saga's recovery (after the crash). Akka's AtLeastOnceDelivery trait takes care of not redelivering already confirmed messages.

Ok, good question again :) The event handler mentioned above is executed before the Saga sends the acknowledgement of receiving the event to the SagaManager. So in case of a crash, the SagaManager will notice an unconfirmed event in its journal and will "resume" the Saga (by resending the event).

Aha, I read about the acknowledgement, but I didn't realize it was also used for event delivery (I thought only for commands), so the SagaManager keeps certain state about the confirmation of those events. Great, all the pieces fit now. I'll have a look at the task and continue my Scala learning to see if I can work out some nice documentation :)