Category Archives: CodeProject

Last time we looked at writing unit tests for our code using ScalaTest. This time we will be looking at mocking.

In .NET there are several choices available that I like (and a couple that I don’t), such as:

Moq

FakeItEasy

RhinoMocks (this is one I am not keen on)

I personally am most familiar with Moq, so when I started looking at JVM based mocking frameworks I kind of wanted one that used roughly the same syntax as the ones that I had used in .NET land.

There are several choices available that I think are quite nice, namely:

ScalaMock

EasyMock

JMock

Mockito

All of which play nicely with ScalaTest (which I am sure you are all very pleased to hear).

So with that list, what did I decide upon? I personally opted for Mockito, as I liked its syntax the best. That is not to say the others are not fine and dandy; it is just that I personally liked Mockito, and it seemed to have good documentation and favorable Google search results, so Mockito it is.

So for the rest of this post I will talk about how to use Mockito to write our mocks. I will be using Mockito alongside ScalaTest, which we looked at last time.

SBT Requirements

As with most of the previous posts you will need to grab the libraries using SBT. As such your SBT file will need to use the following:
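The original SBT snippet did not survive this archive copy, but a build.sbt fragment along these lines (the version numbers are illustrative, roughly current when this was written) would pull in both libraries:

```scala
libraryDependencies ++= Seq(
  // ScalaTest itself, test scope only
  "org.scalatest" %% "scalatest" % "2.2.4" % "test",
  // Mockito core, which ScalaTest's MockitoSugar trait builds on
  "org.mockito" % "mockito-core" % "1.10.19" % "test"
)
```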

Mocking a trait with Mockito is quite easy. You stub the mock out using the Mockito functions:

when

thenReturn
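The original listing was lost from this archive copy, so here is a minimal, hedged sketch of mocking a trait and stubbing it with when/thenReturn (the Account trait and its member are illustrative names, not from the original post):

```scala
import org.mockito.Mockito._

// illustrative trait to be mocked; any trait will do
trait Account {
  def balance: Int
}

object MockingDemo extends App {
  // create the mock straight from the trait
  val mockAccount = mock(classOf[Account])

  // stub the call: whenever balance is called, return 100
  when(mockAccount.balance).thenReturn(100)

  println(mockAccount.balance) // prints 100
}
```

If you mix ScalaTest’s MockitoSugar trait into your suite, the same mock can be created with the shorter mock[Account] syntax.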

Return Values

The “thenReturn” Mockito function is what you would use to set up your return value. If you want a dynamic return value, this could quite easily call some other function which deals with creating the return values. Kind of a return value factory method.
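One way to sketch that factory idea (the names here are illustrative; note that thenReturn evaluates its argument once, at stubbing time, whereas Mockito’s thenAnswer calls back into your code on every invocation):

```scala
import org.mockito.Mockito._
import org.mockito.invocation.InvocationOnMock
import org.mockito.stubbing.Answer

trait Quoter {
  def quote(symbol: String): Double
}

object ReturnValueFactoryDemo extends App {
  // a "return value factory method"
  def makeQuote(): Double = 42.0

  val mockQuoter = mock(classOf[Quoter])

  // the factory result is captured once, when the stub is set up
  when(mockQuoter.quote("GOOG")).thenReturn(makeQuote())

  // thenAnswer re-runs your factory on EVERY invocation instead
  when(mockQuoter.quote("MSFT")).thenAnswer(new Answer[Double] {
    override def answer(invocation: InvocationOnMock): Double = makeQuote() * 2
  })

  println(mockQuoter.quote("GOOG")) // 42.0
  println(mockQuoter.quote("MSFT")) // 84.0
}
```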

Argument Matching

Mockito comes with something that allows you to match against any argument value. It also comes with regex matchers, and allows you to write custom matchers if the ones out of the box don’t quite fit your needs.

Here is an example of writing a mock where we use the standard argument matchers:
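The example listing itself was lost from this archive copy, so here is a hedged sketch of the built-in matchers (the Logger trait is an illustrative name; anyString and matches come from org.mockito.Matchers in Mockito 1.x):

```scala
import org.mockito.Mockito._
import org.mockito.Matchers.{anyString, matches}

trait Logger {
  def log(message: String): Boolean
}

object MatcherDemo extends App {
  // anyString() matches whatever value is passed in
  val anyLogger = mock(classOf[Logger])
  when(anyLogger.log(anyString())).thenReturn(true)
  println(anyLogger.log("absolutely anything")) // true

  // matches(regex) only stubs calls whose argument fits the pattern
  val regexLogger = mock(classOf[Logger])
  when(regexLogger.log(matches("ERROR.*"))).thenReturn(true)
  println(regexLogger.log("ERROR : it went bang")) // true
  println(regexLogger.log("INFO : all fine"))      // false (unstubbed default)
}
```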

Anyway behind the scenes I will be studying more and more stuff about how to get myself to that point. As such I guess it is only natural that I may post some more stuff about Scala in the future.

But for now this is it; this is the end of the line for this brief series of posts on Scala. I hope you have all enjoyed the posts, and if you have, please feel free to leave a comment; they are always appreciated.

So last time we looked at how to use Slick to connect to a SQL server database.

This time we look at how to use one of the 2 popular Scala testing frameworks.

The 2 big names when it comes to Scala testing are

ScalaTest

Specs2

I have chosen to use ScalaTest as it seems slightly more popular when you do a Google search, and I quite liked the syntax. That said, Specs2 is also very good, so if you fancy having a look at that you should.

SBT for ScalaTest

So what do we need to get started with ScalaTest? As always we need to grab the JAR, which we do using SBT.

You can alternatively import the members of the trait, a technique particularly useful when you want to try out matcher expressions in the Scala interpreter. Here’s an example where the members of Matchers are imported:
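The snippet was lost from this archive copy; a sketch of that technique, assuming ScalaTest 2.x where the Matchers companion object exists, would look like this:

```scala
import org.scalatest.Matchers._

object MatchersImportDemo extends App {
  val result = 8

  // these pass silently
  result should equal (8)
  result should be < 10

  // a failing expression such as the following would throw
  // a TestFailedException describing the mismatch:
  // result should equal (9)
}
```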

The other day I had a requirement to schedule something in my app to run at certain times, and at fixed intervals thereafter. Typically I would just solve this using either a simple Timer, or turn to my friend Reactive Extensions by way of Observable.Timer(..).

The thing is, I decided to have a quick look at something I have always known about but never really used for scheduling, which is Quartz.NET, which actually has some pretty good documentation up already:

A friend of mine, Marlon Grech, has his own business, and he has a nice parallax effect web site: http://www.thynksoftware.com/. Over there on his “contact us” page, it has this very cool folding Google maps thing. I have wondered how it was done for a while now; today I decided to find out. A colleague of mine was like WHY!… Ashic, if you are reading this, sorry, but I think it is cool. I promise you I will be back to trying to slay the world with sockets tomorrow; a slight distraction shall we say.

Not Really My Idea – Credit Where Credit Is Due

Now the code I present in this post is not my own at all; I have added the ability to toggle the folding of the map, but that really is all I have done. Nonetheless, I think it is still of interest to describe how it was done, and worth a small write up. I think the original authors have done a great job, but they did not really explain anything, so hopefully by the time you get to the end of this post, the effect will be a bit more familiar to you.

The Basic Idea

The idea is actually not too hard to grasp. There is a master DIV, which contains the actual Google map; this DIV has a VERY low opacity, so low you can’t actually see it. Then there are 6 other DIVs that each get a slice of the original DIV. This is done by some clever margins, as can be seen in this image and the code that follows it:

Each slice of the original gets a certain margin applied, and the overflow for the 6 DIVs is hidden. So by moving the slice into the desired position by way of clever margin positioning, the remaining portion of the map for that slice is not seen, thanks to overflow being hidden. Sneaky!
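The image and listing were lost from this archive copy, but the trick can be sketched with markup like this (dimensions and class names are my own illustrative choices, not from the original source):

```html
<!-- the master DIV holding the real map: opacity so low it is invisible -->
<div id="map" style="width:600px; height:400px; opacity:0.01;"></div>

<!-- one of the six slice DIVs: a 200x200 window with overflow hidden -->
<div class="slice" style="width:200px; height:200px; overflow:hidden;">
  <!-- negative margins drag the wanted 200x200 region of the map copy
       into view; everything else is clipped by overflow:hidden -->
  <div class="mapCopy" style="width:600px; height:400px; margin-left:-200px; margin-top:0;"></div>
</div>
```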

Last time we looked at using ZeroMQ to use a “Divide And Conquer” pattern to distribute work to a number of workers and then combine the results again.

Since I wrote that last post I have had a bit of a think about this series of posts, and realised that nothing I can say here would be as good or as thorough as the guide, so I have had to rethink my strategy a bit for the posts that I may write on ZeroMQ from here on in. So rather than regurgitate what has already been said by Pieter on the guide web site, I will instead only be writing about stuff that I think is new, or worthy of a post. Now this could mean that the posts are less frequent, but I hope when there is one it will be of more interest than me just saying “here is a NetMQ version of the ‘Paranoid Pirate Pattern’, go check this link at the guide for more information”.

So where does that leave this series of posts? Well to be honest slightly in limbo, but I have also been in contact with Pieter Hintjens, who was kind enough to give me a little push into something that may be of interest.

Pieter notified me of an Actor model that was part of the high level C library for ZeroMQ called “czmq”, which is not contained in the NetMQ GitHub repository. So I had a call with Pieter, and looked into that.

This post will discuss a very simple actor model that I have written to work with NetMQ. Pieter has given it the once over, and I have also talked it through with a regular ZeroMQ user at work, so I think it is an ok version of the original C ZeroMQ “czmq” version.

Where Is The Code?

As always before we start, it’s only polite to tell you where the code is, and it is as before on GitHub:

What Is An Actor Model?

Here is what Wikipedia has to say in its introduction to the Actor model.

The actor model in computer science is a mathematical model of concurrent computation that treats “actors” as the universal primitives of concurrent digital computation: in response to a message that it receives, an actor can make local decisions, create more actors, send more messages, and determine how to respond to the next message received.

….

….

The Actor model adopts the philosophy that everything is an actor. This is similar to the everything is an object philosophy used by some object-oriented programming languages, but differs in that object-oriented software is typically executed sequentially, while the Actor model is inherently concurrent.

An actor is a computational entity that, in response to a message it receives, can concurrently:

send a finite number of messages to other actors;

create a finite number of new actors;

designate the behavior to be used for the next message it receives.

There is no assumed sequence to the above actions and they could be carried out in parallel.

Recipients of messages are identified by address, sometimes called “mailing address”. Thus an actor can only communicate with actors whose addresses it has. It can obtain those from a message it receives, or if the address is for an actor it has itself created.

The Actor model is characterized by inherent concurrency of computation within and among actors, dynamic creation of actors, inclusion of actor addresses in messages, and interaction only through direct asynchronous message passing with no restriction on message arrival order.

How I like to think of actors is that they may be used to alleviate some of the synchronization concerns of using shared data structures. This is achieved by your application code talking to actors via message passing/receiving. The actor itself may pass messages on to other actors, or work on the passed message itself. Because messages are passed rather than data structures shared, it may help to think of the actor (or any subsequent actor it sends messages to) as working on a copy of the data rather than on the same shared structure. This gets rid of the need to worry about nasty things like lock(s), and the nasty timing issues that may arise in multithreaded code. If the actor is working with its own copy of the data then we should have no issues with other threads wanting to work with that data, as the only place the data can be is within the actor itself, unless we pass another message to a different actor. If we were to do that, though, the new message to the other actor would also carry a copy of the data, so it would also be thread safe.
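The copy-passing idea above can be sketched without any framework at all: one thread owns the data, and everyone else talks to it through a queue of message copies. This is NOT the NetMQ actor shown later in the post, just a bare-bones illustration with names of my own choosing:

```csharp
using System;
using System.Collections.Concurrent;
using System.Threading.Tasks;

// each message is a self-contained copy of the data, not shared state
public struct Deposit
{
    public decimal Amount;
    public Deposit(decimal amount) { Amount = amount; }
}

public class AccountActor
{
    private readonly BlockingCollection<Deposit> mailbox = new BlockingCollection<Deposit>();
    private readonly Task<decimal> worker;

    public AccountActor()
    {
        // exactly one thread ever touches 'balance', so no locks are needed
        worker = Task.Factory.StartNew(() =>
        {
            decimal balance = 0;
            foreach (var msg in mailbox.GetConsumingEnumerable())
                balance += msg.Amount;
            return balance;
        }, TaskCreationOptions.LongRunning);
    }

    public void Send(Deposit message) { mailbox.Add(message); }

    // closing the mailbox lets the worker finish and yield the final state
    public decimal Complete()
    {
        mailbox.CompleteAdding();
        return worker.Result;
    }
}

public class Program
{
    public static void Main()
    {
        var actor = new AccountActor();
        // many threads can safely Send; the queue hands over copies
        Parallel.For(0, 100, i => actor.Send(new Deposit(1m)));
        Console.WriteLine(actor.Complete()); // prints 100
    }
}
```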

I hope you see what I am trying to explain there; maybe a diagram will help.

Multi Threaded Application Using Shared Data Structure

A fairly common thing to do is have multiple threads running to speed things up, but then you realise that your threads need to mutate the state of some shared data structure, so you have to involve threading synchronization primitives (most commonly lock(..) statements, to create your user defined critical sections). This will work, but now you are introducing artificial delays due to having to wait for the lock to be released before Thread X’s code can run.

To take this one step further, let’s see some code that illustrates this. Imagine we had this sort of data structure representing a very slim bank account:
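The original listing was lost from this archive copy, but it would have been along these lines (the class shape and member names are my own guesses, shown purely to illustrate the shared-state problem):

```csharp
using System;
using System.Threading.Tasks;

public class Account
{
    private readonly object syncLock = new object();
    private decimal balance;

    // without the lock, these read-modify-write operations could interleave
    public void Credit(decimal amount)
    {
        lock (syncLock) { balance += amount; }
    }

    public void Debit(decimal amount)
    {
        lock (syncLock) { balance -= amount; }
    }

    public decimal Balance
    {
        get { lock (syncLock) { return balance; } }
    }
}

public class Program
{
    public static void Main()
    {
        var account = new Account();
        // one thread credits while another debits the SAME shared object
        var credits = Task.Run(() => { for (int i = 0; i < 10000; i++) account.Credit(1); });
        var debits  = Task.Run(() => { for (int i = 0; i < 10000; i++) account.Debit(1); });
        Task.WaitAll(credits, debits);
        Console.WriteLine("Final balance : {0}", account.Balance); // always 0, thanks to the lock
    }
}
```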

I have possibly picked an example that you think may not actually happen in real life, and to be honest this scenario may not pop up in real life; who would do something as silly as crediting an account in one thread, and debiting it in another? We are all diligent developers; we would not let this one into the code, would we?

To be honest, whether the actual example has real world merit or not, the point remains the same: since we have more than one thread accessing a shared data structure, access to it must be synchronized, which is typically done using a lock(..) statement, as can be seen in the code.

Now don’t get me wrong the above code does work, as shown in the output below:

Perhaps there might be a more interesting way though!

Actor Model

The actor model takes a different approach, whereby message passing is used, which may involve some form of serialization as the messages are passed down the wire, and which kind of guarantees there are no shared structures to contend with. Now I am not saying all actor frameworks use message passing down the wire (serialization), but the code presented in this article does.

The basic idea is that each thread would talk to an Actor, and send/receive messages with the actor.

If you wanted to get even more isolation, you could use thread local storage, where each thread could have its own copy of the actor which it, and it alone, talks to.

Anyway enough talking, I am sure some of you want to see the code right?

The Implementation

The idea is that the Actor itself may be treated like a ZeroMQ (NetMQ in my case) socket, and may therefore be used to Send/Receive messages. Now you may be wondering, if I send the Actor a message, who is listening to that message, and how is it that I am able to receive a message from the Actor?

The answer to that lies inside the implementation of the simple Actor framework in this post. Internally the Actor spins up another thread. Within that new thread is one end of a PairSocket where the Actor itself is the other end of the pipe which is also a PairSocket (recall I said the Actor is able to act as a socket). The Actor and the other end of the pipe communicate via message passing, and they use an in process (inproc) protocol to do so.

The initial message passed to the Actor forms some sort of protocol. The other end of the Actor pipe (i.e. the thread the Actor created), which I am calling the “shim” (a name borrowed from the ZeroMQ czmq implementation), MUST know how to deal with the protocol that the user code sends via the Actor.

The way I have chosen to do this is that you (i.e. the user of the simple Actor library) will need to create a “shim” handler class. This “shim” handler class may implement a very simple protocol or a very complicated one (for the demo I have stuck to very simple ones), as long as it understands the message/command being sent from the Actor, and knows what to do with it. That is up to you to come up with; I have no silver bullet for that.

One final thing to explain is that the “shim” handler may be passed some initial arguments (outside of the message passing) should you want to make use of this feature. It is something you may not want/need to use, but it is there should you want to use it.

Actor Code

Here is the code for the Actor itself, where it can be seen that it may be treated as a socket using the Send/Receive methods. The Actor also creates a new thread which is used to run the code in the shim handler. The Actor also creates the shim and the pair of PairSocket(s) for message passing.

namespace NetMQActors
{
    /// <summary>
    /// The Actor represents one end of a two way pipe between 2 PairSocket(s). Where
    /// the actor may be passed messages, that are sent to the other end of the pipe
    /// which I am calling the "shim"
    /// </summary>
    public class Actor : IOutgoingSocket, IReceivingSocket, IDisposable
    {
        private readonly PairSocket self;
        private readonly Shim shim;
        private Random rand = new Random();
        private CancellationTokenSource cts = new CancellationTokenSource();

The shim represents the other end of the pipe; the shim is essentially a property bag, but it does hold a reference to the IShimHandler that the thread in the actual Actor will run. The IShimHandler is the one that MUST understand the protocol, and carry out any work.

namespace NetMQActors
{
    /// <summary>
    /// Shim represents one end of the in process pipe, where the Shim expects
    /// to be supplied with a <c>IShimHandler</c> that it would use for running the pipe
    /// protocol with the original Actor PairSocket the other end of the pipe
    /// </summary>
    public class Shim : IDisposable
    {
        public Shim(IShimHandler shimHandler, PairSocket pipe)
        {
            this.Handler = shimHandler;
            this.Pipe = pipe;
        }

An Example : Simple EchoShim Handler

This example shows how to create a simple echo shim handler that can be used with the standard Actor code above. The EchoShimHandler presented here uses an EXTREMELY simple protocol; it does the following:

It expects the initial arguments to be 1 in length

It expects the initial argument element 0 to be “Hello World”

It expects a multi part message, where the 1st message frame is the command string “ECHO”

If all of those criteria are satisfied, then the EchoShimHandler will write to its end of the PairSocket pipe. The user of the Actor at the other end of the pipe (i.e. the other PairSocket) can then receive the value from the EchoShimHandler. Remember, in this mini library the Actor may act as a regular NetMQ socket.

namespace NetMQActors
{
    /// <summary>
    /// This handler class is a specific implementation that you would need
    /// to implement per actor. This essentially contains your commands/protocol
    /// and should deal with any command workload, as well as sending back to the
    /// other end of the PairSocket, which calling code would receive by using the
    /// Actor class's various ReceiveXXX() methods.
    ///
    /// This is a VERY simple protocol but it just demonstrates what you would need
    /// to do to implement your own Shim handler
    /// </summary>
    public class EchoShimHandler : IShimHandler
    {
        public void Run(PairSocket shim, object[] args, CancellationToken token)
        {
            if (args == null || args.Count() != 1 || (string)args[0] != "Hello World")
                throw new InvalidOperationException("Args were not correct, expected 'Hello World'");

while (!token.IsCancellationRequested)
{
    // Message for this actor/shim handler is expected to be
    // Frame[0] : Command
    // Frame[1] : Payload
    //
    // Result back to actor is a simple echoing of the Payload, where
    // the payload is prefixed with "ECHO BACK "
    NetMQMessage msg = null;

    // this may throw NetMQException if we have disposed of the actor
    // end of the pipe, and the CancellationToken.IsCancellationRequested
    // did not get picked up this loop cycle
    msg = shim.ReceiveMessage();

And here is the Actor test code that goes with this, where it can be seen that we are able to send/receive using the Actor. There is also an example here that shows us trying to use a previously disposed Actor, which we expect to fail, and it does.

// Round 2 : Should NOT work, as we are now using a disposed actor
try
{
    Console.WriteLine("ROUND2");
    Console.WriteLine("========================");
    actor.SendMore("ECHO");
    actor.Send("This is a string");
    result = actor.ReceiveString();
}
catch (NetMQException nex)
{
    Console.WriteLine("NetMQException : Actor has been disposed so this is expected\r\n");
}

Another Example : Sending JSON Objects

This example shows how to create a simple account shim handler that can be used with the standard Actor code above. The AccountShimHandler presented here uses another simple protocol (on purpose; you may choose to make this as simple or as complex as you wish); it does the following:

It expects the initial arguments to be 1 in length

It expects the initial argument element 0 to be a JSON serialized string of an AccountAction

It expects a multi part message, where the 1st message frame is the command string “AMEND ACCOUNT”

It expects the 2nd message frame to be a JSON serialized string of an Account

If all these criteria are met, then the AccountShimHandler deserializes the JSON string into an actual Account object, will either debit or credit that Account, and then serializes the modified Account object back into JSON and sends it back to the Actor via the PairSocket in the AccountShimHandler.

namespace NetMQActors
{
    /// <summary>
    /// This handler class is a specific implementation that you would need
    /// to implement per actor. This essentially contains your commands/protocol
    /// and should deal with any command workload, as well as sending back to the
    /// other end of the PairSocket, which calling code would receive by using the
    /// Actor class's various ReceiveXXX() methods.
    ///
    /// This is a VERY simple protocol but it just demonstrates what you would need
    /// to do to implement your own Shim handler
    /// </summary>
    public class AccountShimHandler : IShimHandler
    {

while (!token.IsCancellationRequested)
{
    // Message for this actor/shim handler is expected to be
    // Frame[0] : Command
    // Frame[1] : Payload
    //
    // Result back to actor is a simple echoing of the Payload, where
    // the payload is prefixed with "AMEND ACCOUNT"
    NetMQMessage msg = null;

    // this may throw NetMQException if we have disposed of the actor
    // end of the pipe, and the CancellationToken.IsCancellationRequested
    // did not get picked up this loop cycle
    msg = shim.ReceiveMessage();

Which gives a result something like this. If you read the code above you will see the Account object we send and the one we receive are NOT the same object. This is due to the fact they have been sent down the wire using NetMQ sockets.

Some Actor Frameworks To Look At

There are a couple of actor frameworks out there that I am aware of, namely the following ones. There will be more, but these are the main ones I am aware of.

So from here on in it is just a matter of going through some of the well known patterns from the ZeroMQ guide.

Now it would be immoral (even fraudulent) of me not to mention this up front: in the main, the information that I present in the remaining posts in this series will be based quite heavily on the ZeroMQ guide by Pieter Hintjens. Pieter has actually been in touch with me regarding this series of posts, and has been kind enough to let me run each new post by him. I think that is generous, and I am extremely pleased to have Pieter on hand to run them past. What that means to you is that if there are any misunderstandings/mistakes on my behalf, I am sure Pieter will point them out (at which point I will obviously correct any mistakes made; hopefully I will not make any). So big thanks go out to Pieter; cheers, as we would say in England.

It is all good publicity for ZeroMQ though, and as NetMQ is a native port it is not one of the ones covered by the language bindings on the ZeroMQ guide site. So even though I am basing my content on the fantastic work done by Pieter, it will obviously be using NetMQ, so from that point of view the code is still very much relevant.

Where Is The Code?

As always before we start, it’s only polite to tell you where the code is, and it is as before on GitHub:

What Will We Be Doing This Time?

This time we will continue to look at ZeroMQ patterns. Which is actually what the remaining posts will all pretty much be focussed on.

The pattern that we will look at this time involves dividing a problem domain into smaller chunks, distributing them across workers, and then collating the results back together again.

This pattern is really a “divide and conquer” one, but it has also been called “Parallel Pipeline”. With all the remaining posts, I will be linking back to the original portion of the guide such that you can read more about the problem and Pieter’s solution.

The idea is that you have something that generates work, and then distributes the work out to n-many workers. The workers each do some work, and push their results to some other process (could be a thread too) where the workers’ results are accumulated.

In the ZeroMQ guide, the example has the work generator just tell each worker to sleep for a period of time. I toyed with creating a more elaborate example than this, but in the end felt that the example’s simplicity was quite important, so I have stuck with the workload for each worker just being a value that tells the worker to sleep for a number of milliseconds (thus simulating some actual work). This, as I say, has been borrowed from the ZeroMQ guide.

In real life the work could obviously be anything, though you would more than likely want the work to be something that could be cut up and distributed without the work generator caring/knowing how many workers there are.

Console.WriteLine("Press enter when workers are ready");
Console.ReadLine();

// the first message is "0" and signals start of batch
// see the Sink.csproj Program.cs file for where this is used
Console.WriteLine("Sending start of batch to Sink");
sink.Send("0");

// process tasks forever
while (true)
{
    // workload from the ventilator is a simple delay
    // to simulate some work being done, see
    // Ventilator.csproj Program.cs for the workload sent.
    // In real life some more meaningful work would be done
    string workload = receiver.ReceiveString();

The Ventilator uses a NetMQ PushSocket to distribute work to the workers; this is referred to as load balancing.

The Ventilator and the Sink are the static parts of the system, whereas the workers are dynamic. It is trivial to add more workers: we can just spin up a new instance of a worker, and in theory the work gets done more quickly.

We need to synchronize the starting of the batch (when the workers are ready); if we did not do that, the first worker to connect would get more messages than the rest, which is not really load balanced.

The Sink uses a NetMQ PullSocket to accumulate the results from the workers.

Last time we looked at how to use the Poller to work with multiple sockets, and detect their readiness. This time we will continue to work with the familiar request/response model that we have been using thus far. We will however be beefing things up a bit, and shall examine several ways in which you can have more than one thread pushing messages to the server and getting responses, which is a fairly typical requirement (at least in my book it is).

Where Is The Code?

As always before we start, it’s only polite to tell you where the code is, and it is as before on GitHub:

One Thing Before We Start

As you may have realised by now, ZeroMQ is a messaging library, and as such it promotes the idea of lock free messaging. I also happen to think this is a very good idea. You can achieve an excellent throughput of messages and save yourself a lot of synchronization pain if you avoid shared data structures, since you will no longer need to synchronize access to them. So in general, try to work with ZeroMQ in the way it wants to be worked with, which is via message passing, avoiding locks and shared data structures.

Setting The Scene For This Post

Ok so we are nearly at the point where we can start to look at some code, but before we do that, let’s just talk a little bit more about what this post is trying to discuss.

In the code I typically write, it is quite common for a bunch of client threads all to be running at once, each capable of talking to the server. If this sounds like a requirement that you have had to deal with, then you may find this post of use, as this is exactly the scenario this post is aimed at solving.

As the aim of this post is to have asynchronous clients, we need an asynchronous server too, so we use DealerSocket(s) for the client(s) and a RouterSocket for the server.

As with most things there is more than one way to skin a cat, so we will look at a couple of options, each with their own pros/cons.

Option 1 : Each Thread Has Its Own DealerSocket

The first option does need a bit of .NET threading knowledge, but if you have that, then the idea is a simple one. For each client thread we also create a dedicated DealerSocket that *should be* used exclusively by that thread.

This is achieved using the ThreadLocal<T> .NET class, which allows us to have a DealerSocket per thread. We add each of the client created DealerSocket(s) to a Poller instance, and listen to the ReceiveReady event on each socket, which allows us to get the message back from the server.

The obvious downside to this approach is that there will be more socket(s) created on the client side. The upside is that it is very easy to implement, and just works.

// NOTES
// 1. Use ThreadLocal<DealerSocket> where each thread has
//    its own client DealerSocket to talk to the server
// 2. Each thread can send using its own socket
// 3. Each thread's socket is added to the poller
ThreadLocal<DealerSocket> clientSocketPerThread = new ThreadLocal<DealerSocket>();
int delay = 3000;
Poller poller = new Poller();

// start some threads, each with its own DealerSocket
// to talk to the server socket. Creates lots of sockets,
// but no nasty race conditions, no shared state, each
// thread has its own socket, happy days
for (int i = 0; i < 3; i++)
{
    Task.Factory.StartNew((state) =>
    {
        DealerSocket client = null;

Option 2 : Each Thread Delegates Off To A Local Broker

The next example keeps the idea of separate threads that want to send message(s) to the server. This time, however, we will use a broker on the client side. The idea is that the client threads will push to a shared queue. I know I have told you to avoid shared data structures, but the thing is, this is not really a shared data structure; it is just a thread safe queue that many threads can write to, whereas a shared data structure may mean several threads all trying to update the current bid rate of an FX option quote price. There is a difference. OK, the shared queue will have some synchronization somewhere to make it thread safe, but thankfully we can rely on the good work of the PFX team at Microsoft for that. Those guys are smart, and I am sure the concurrent collections namespace is pretty well designed and can be trusted to be pretty optimal.

Again we need to call on a bit of .NET know-how: for the centralized queue we use a ConcurrentQueue<T>. All client threads will enqueue their messages for the server here.

There will also be another thread started. This extra thread is the one that will process the messages that have been queued onto the centralized queue. When a message is taken off the centralized queue it will be sent to the server. The thing is, only the thread that reads from the centralized queue will send messages to the server.

As we still want messages to be sent out asynchronously we stick with using a DealerSocket, but since there is now only one place where we send messages to the server, we only need a single DealerSocket.

We add the SINGLE DealerSocket to a Poller instance, and listen to its ReceiveReady event, which allows us to get the message back from the server.

This is more complex than the first example as there are more moving parts, but we no longer have loads of sockets being created. There is just one.

namespace ConcurrentQueueDemo
{
    public class Program
    {
        public void Run()
        {
            // NOTES
            // 1. Use many threads each writing to a ConcurrentQueue
            // 2. Extra thread to read from the ConcurrentQueue, and this is the one that
            //    will deal with writing to the server
            ConcurrentQueue<string> messages = new ConcurrentQueue<string>();
            int delay = 3000;
            Poller poller = new Poller();

// start some threads, where each thread will use a client side
// broker (a simple thread that monitors a ConcurrentQueue), where
// ONLY the client side broker talks to the server
for (int i = 0; i < 3; i++)
{
    Task.Factory.StartNew((state) =>
    {
        while (true)
        {
            messages.Enqueue(state.ToString());
            Thread.Sleep(delay);
        }

Option 3 : Use NetMQScheduler

The final option is to use the NetMQ library class : NetMQScheduler. I think the best place to start with that is by reading the link I just included. Then come back here.

…….

…….

Time passes

…….

…….

Oh hello you’re back. Ok so now you know that the NetMQScheduler offers us a way to use TPL to schedule work and that there is a Poller that we pass into the NetMQScheduler. Cool.

The NetMQScheduler is a custom TPL scheduler, which allows us to create tasks that we want done, and it will take care of the threading aspects of them. Since we told the NetMQScheduler about the Poller we want to use we are able to hook up the ReceiveReady event and use that to get messages back from the server.

The difference here is that since we are using TPL and NetMQ we need to use TPL Task(s) and the NetMQScheduler instance whenever we want to Send/Receive.

To be honest, I think I like this design the least, as it mixes up too many concepts, and the TPL stuff tends to be mixing a bit too much with the ZeroMQ goodness for my taste. I did however just want to show this example for completeness.

So the code for this example has two parts. A simple client, and then the code that spins up a client instance and then multiple threads that use the client instance to send messages to the server. There is also a basic server loop (which I will show below under the title “The Rest”)

Client Code

Here is the client code, where it can be seen that we create a NetMQScheduler which gets handed a new Poller instance to use internally. The idea is that anyone can send a message simply by calling the client’s SendMessage(..) method.

public async Task SendMessage(NetMQMessage message)
{
    // instead of creating an inproc socket which listens for messages and
    // then sends to the server, we just create a task and run the code on
    // the poller thread, which is the thread of the clientSocket
    Task task = new Task(() => clientSocket.SendMessage(message));
    task.Start(scheduler);
    await task;
    await ReceiveMessage();
}

namespace NetMQSchedulerDemo
{
    public class Program
    {
        public void Run()
        {
            //NOTES
            //1. Use NetMQ's NetMQScheduler to communicate with the
            //   server. All Send/Receive MUST be done via the
            //   NetMQScheduler and TPL Tasks. See the Client class
            //   for more information on this

Where Is The Code?

Handling Multiple Sockets, And Why Would You Need To?

So why would you want to handle multiple sockets anyway? Well there are a variety of reasons, such as:

You may have multiple sockets within one process that rely on each other, and the timings are such that you need to know that the socket(s) are ready before they can receive anything

You may have a Request, as well as a Publisher socket in one process

To be honest there are times you may end up with more than one socket per process. And there may be occasions when you only want to use the socket(s) when they are deemed ready.

ZeroMQ actually has a concept of a “Poller” that can be used to determine if a socket is deemed ready to use.

NetMQ has an implementation of the “Poller”, and it can be used to do the following things:

Monitor a single socket, for readiness

Monitor an IEnumerable&lt;NetMQSocket&gt; for readiness

Allow NetMQSocket(s) to be added dynamically and still report on the readiness of the new sockets

Allow NetMQSocket(s) to be removed dynamically

Raise an event on the socket instance when it is ready

A good way to look into the NetMQPoller class is via some tests. I am not going to test everything in this post, but if you want more, NetMQ itself comes with some very good tests for the Poller, which is in fact where I lifted these test cases from.

Some Examples

As I just stated I am not the author of these tests; I have taken a subset of the NetMQ Poller test suite that I think may be pertinent to an introductory discussion around the Poller class.

NOTE : This series of posts is meant as a beginners guide, and advanced ZeroMQ users would likely not get too much from this series of posts.

Single Socket Poll Test

This test case uses the (hopefully by now) familiar Request/Response socket arrangement. We will use the Poller to alert us (via the xxxxSocket.ReceiveReady event that the Poller raises for us) that the ResponseSocket is ready.
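To make that concrete, here is a minimal sketch of the arrangement. This is written against the older NetMQ 3.x style API used throughout this series (NetMQContext, Poller.Start/Stop, ReceiveString); method names may differ in newer NetMQ versions, so treat it as a sketch rather than gospel.

```csharp
using (NetMQContext context = NetMQContext.Create())
using (var rep = context.CreateResponseSocket())
using (var req = context.CreateRequestSocket())
{
    rep.Bind("tcp://127.0.0.1:5002");
    req.Connect("tcp://127.0.0.1:5002");

    // the Poller raises ReceiveReady on the socket when it has something to read
    rep.ReceiveReady += (sender, e) =>
    {
        string message = e.Socket.ReceiveString();
        e.Socket.Send("Response");
    };

    var poller = new Poller();
    poller.AddSocket(rep);

    // Poller.Start() blocks, so run it on a background thread
    Task.Factory.StartNew(poller.Start);

    req.Send("Hello");
    string reply = req.ReceiveString();

    poller.Stop();
}
```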

Add Socket During Work Test

This example shows how we can add extra socket(s) to the Poller at runtime, and the Poller will still raise the xxxxSocket.ReceiveReady event for us.

[Test]
public void AddSocketDuringWorkTest()
{
    using (NetMQContext contex = NetMQContext.Create())
    {
        // we are using three responses to make sure we actually
        // move the correct socket and other sockets still work
        using (var router = contex.CreateRouterSocket())
        using (var router2 = contex.CreateRouterSocket())
        {
            router.Bind("tcp://127.0.0.1:5002");
            router2.Bind("tcp://127.0.0.1:5003");

Cancel Socket Test

This final example shows 3 RouterSockets connected to 3 DealerSockets respectively (we will talk about DealerSocket(s) in a later post, for now you can think of them as typically being used for asynchronous workers). We then add all the routers to the Poller. Within the 1st RouterSocket.ReceiveReady we remove the RouterSocket from the Poller, so it should not receive any more messages back from its respective DealerSocket. Here is the code for this test :
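The interesting part of that test is the removal itself. As a hedged sketch (assuming a `poller` and a `router` variable are in scope, and the older NetMQ API), removing the socket from inside its own ReceiveReady handler might look like this:

```csharp
// inside the 1st RouterSocket's ReceiveReady handler we drain the
// current message, then remove the socket from the Poller, so it
// receives no further messages from its DealerSocket
router.ReceiveReady += (sender, e) =>
{
    bool more;
    e.Socket.Receive(out more); // identity frame
    e.Socket.Receive(out more); // payload frame
    poller.RemoveSocket(e.Socket);
};
```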

Where Is The Code?

Socket Options

Depending on the type of sockets you are using, or the topology you are attempting to create, you may find that you need to set some ZeroMQ options. In NetMQ this is done using the xxxxSocket.Options property.

Here is a listing of the available properties that you may set on a xxxxSocket. It is hard to say exactly which of these values you may need to set, as that obviously depends entirely on what you are trying to achieve. All I can do is list the options, and make you aware of them. So here they are

Affinity

BackLog

CopyMessages

DelayAttachOnConnect

Endian

GetLastEndpoint

IPv4Only

Identity

Linger

MaxMsgSize

MulticastHops

MulticastRate

MulticastRecoveryInterval

ReceiveHighWaterMark

ReceiveMore

ReceiveTimeout

ReceiveBuffer

ReconnectInterval

ReconnectIntervalMax

SendHighWaterMark

SendTimeout

SendBuffer

TcpAcceptFilter

TcpKeepAlive

TcpKeepaliveCnt

TcpKeepaliveIdle

TcpKeepaliveInterval

XPubVerbose

To see exactly what all these options mean you will more than likely need to refer to the actual ZeroMQ documentation, i.e. the guide.
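By way of illustration only, setting a few of these might look like the following. The values here are made up purely for the example (not recommendations), and the property names are taken from the list above, so check the exact casing against your NetMQ version:

```csharp
using (NetMQContext context = NetMQContext.Create())
using (var socket = context.CreateRequestSocket())
{
    // illustrative values only, not recommendations
    socket.Options.Linger = TimeSpan.FromSeconds(1);
    socket.Options.ReceiveHighWaterMark = 1000;
    socket.Options.Identity = Encoding.Unicode.GetBytes("SocketA");

    socket.Connect("tcp://127.0.0.1:5555");
}
```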

Identity

One of the great things (at least in my opinion) when working with ZeroMQ is that we can still stick with a standard request/response arrangement (just like we had in the 1st post's hello world example https://sachabarbs.wordpress.com/2014/08/19/zeromq-1-introduction/) but we may then choose to switch to having an asynchronous server. This is easily achieved using a RouterSocket for the server. The clients stay as RequestSocket(s).

So this is now an interesting arrangement, we have

Synchronous clients, thanks to standard RequestSocket type

Asynchronous server, thanks to new socket called RouterSocket

The RouterSocket is a personal favourite of mine, as it is very easy to use (as are many of the ZeroMQ sockets, once you know what they do), but it is capable of creating a server that can seamlessly talk to 1000s of clients, all asynchronously, with very few changes to the code we saw in part 1.

Slight Diversion

When you work with RequestSocket(s), they do something clever for you: they always provide a message that has the following frames:

Frame[0] address

Frame[1] empty frame

Frame[2] the message payload

Even though all we did was send a payload (look at the “Hello World” example in part1)

Likewise when you work with ResponseSocket(s), they also do some of the heavy lifting for us, where they always provide a message that has the following frames:

Frame[0] return address

Frame[1] empty frame

Frame[2] the message payload

Even though all we did was send a payload (look at the “Hello World” example in part1)

By understanding how the standard synchronous request/response socket works, it is now fairly easy to create a fully asynchronous server using the RouterSocket, that knows how to dispatch messages back to the correct client. All we need to do is emulate how the standard ResponseSocket works, where we construct the message frames ourselves. Where we would be looking to create the following frames from the RouterSocket (thus emulating the behaviour of the standard ResponseSocket)

Frame[0] return address

Frame[1] empty frame

Frame[2] the message payload
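Pulling that together, here is a hedged sketch of the server side handling (assuming a `serverRouter` RouterSocket variable, and the older NetMQ API used in this series):

```csharp
// receive the full multipart message the RouterSocket hands us
NetMQMessage clientMessage = serverRouter.ReceiveMessage();
NetMQFrame clientAddress = clientMessage[0];          // Frame[0] : return address
                                                      // clientMessage[1] : empty frame
string payload = clientMessage[2].ConvertToString();  // Frame[2] : message payload

// build the reply, emulating what a ResponseSocket would do for us
var reply = new NetMQMessage();
reply.Append(clientAddress);         // Frame[0] : return address
reply.AppendEmptyFrame();            // Frame[1] : empty frame
reply.Append("Reply to " + payload); // Frame[2] : message payload
serverRouter.SendMessage(reply);
```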

I think the best way to understand this is via an example. The example works like this:

There are 4 clients, these are standard synchronous RequestSocket(s)

There is a single asynchronous server, which uses a RouterSocket

If the client sends a message with the prefix “B_” it gets a special message from the server, all other clients get a standard response message

I think to fully appreciate this example, one needs to examine the output, which should be something like this (it may not be exactly this, as the RouterSocket is FULLY async, so it may deal with RequestSocket(s) in a different order for you):

SendMore

ZeroMQ works using message frames. Using ZeroMQ you are able to create multipart messages which you may use for a variety of reasons, such as

Including address information (which we just saw an example of above actually)

Designing a protocol for your end purpose

Sending serialized data (for example the 1st message frame could be the type of the item, and the next message frame could be the actual serialized data)

When you work with multipart messages you must send/receive all the parts of the message you want to work with.

I think the best way to try and get to understand multipart messages is perhaps via a small test. I have stuck to using an all in one demo, which builds on the original “Hello World” request/response demo. We use NUnit to do Asserts on the data between the client/server.

Here is a small test case, where the following points should be observed

We construct the 1st message part and use the xxxxSocket.SendMore() method, to send the 1st message

We construct the 2nd (and final) message part using the xxxxSocket.Send() method

The Server is able to receive the 1st message part, and also assign a value to determine if there are more parts. Which is done by using an overload of the xxxxSocket.Receive(..) that allows us to get an out value for “more”

We may also use an actual NetMQMessage and append to it, which we can then send using xxxxSocket.SendMessage(..), where the receiving socket would use xxxxSocket.ReceiveMessage(..) and can examine the actual NetMQMessage frames

//server send message, this time use NetMQMessage
//which will be sent as frames if the client calls
//ReceiveMessage()
var m3 = new NetMQMessage();
m3.Append("From");
m3.Append("Server");
server.SendMessage(m3);
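To round that out, here is a hedged sketch of the rest of the exchange (assuming `client`/`server` Request/Response sockets are already connected, and the older NetMQ API):

```csharp
// client sends a two part message : the 1st part via SendMore,
// the last part via a plain Send
client.SendMore("Hello");
client.Send("World");

// server receives the parts one at a time, using the overload
// that reports whether more frames follow
bool more;
string part1 = server.ReceiveString(out more); // more == true here
string part2 = server.ReceiveString(out more); // more == false here

// after the server replies with a NetMQMessage (as above), the
// client can examine the frames of the reply
NetMQMessage reply = client.ReceiveMessage();
string frame0 = reply[0].ConvertToString(); // "From"
string frame1 = reply[1].ConvertToString(); // "Server"
```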

Here are a couple of REALLY important points from the ZeroMQ guide when working with SendMore and multipart messages. These talk about the ZeroMQ C++ core implementation, not the NetMQ version, but the points are just as valid when using NetMQ.

Last time we introduced ZeroMQ and also talked about the fact that there was a native C# port by way of the NetMQ library, which as I said we will be using from here on out. I also mentioned that the power of ZeroMQ comes from a bunch of pre-canned sockets, which you can use as building blocks to build massive or small topologies.

From the ZeroMQ guide it states this:

The built-in core ØMQ patterns are:

Request-reply, which connects a set of clients to a set of services. This is a remote procedure call and task distribution pattern.

Pub-sub, which connects a set of publishers to a set of subscribers. This is a data distribution pattern.

Pipeline, which connects nodes in a fan-out/fan-in pattern that can have multiple steps and loops. This is a parallel task distribution and collection pattern.

Exclusive pair, which connects two sockets exclusively. This is a pattern for connecting two threads in a process, not to be confused with “normal” pairs of sockets.

There are of course certain well known patterns, that the chaps that wrote Zero have come across, and talk about in the book and the guide (links here again in case you missed them last time)

I personally think it is a very clever thing to have done, to give certain sections of topologies an actual pattern name, as it means you can Google around a certain pattern name. For example I may Google “Lazy Pirate Pattern C#”, and I would know that the results would almost certainly be talking about the exact socket arrangement I had in mind. So yeah good idea giving these things names.

Standard ZeroMQ Socket Types

Anyway, enough chit chat, let’s get to the crux of what I wanted to talk about this time, which is the different socket types within ZeroMQ.

ZeroMQ actually has the following socket types:

PUB

This is known as a PublisherSocket in NetMQ, and can be used to publish messages.

SUB

This is known as a SubscriberSocket in NetMQ, and can be used to subscribe to message(s) (you can fill in a subscription topic which indicates which published messages you care about)

XPUB

This is known as an XPublisherSocket in NetMQ and can be used to publish messages. XPUB and XSUB are used where you may have to bridge different networks.

XSUB

This is known as an XSubscriberSocket in NetMQ, and can be used to subscribe to message(s) (you can fill in a subscription topic which indicates which published messages you care about). XPUB and XSUB are used where you may have to bridge different networks.

REQ

This is known as a RequestSocket in NetMQ. It is a synchronous blocking socket that would initiate a request message.

REP

This is known as a ResponseSocket in NetMQ. It is a synchronous blocking socket that would provide a response to a message.

ROUTER

This is known as a RouterSocket in NetMQ. Router is typically (but not limited to being) a broker socket, and provides routing, where it would more than likely know how to route messages back to the calling socket, thus its name of “Router”. It is fully asynchronous (non blocking).

The ROUTER socket, unlike other sockets, tracks every connection it has, and tells the caller about these. The way it tells the caller is to stick the connection identity in front of each message received. An identity, sometimes called an address, is just a binary string with no meaning except “this is a unique handle to the connection”. Then, when you send a message via a ROUTER socket, you first send an identity frame.

DEALER

This is known as a DealerSocket in NetMQ. Dealer is typically a worker socket, and doesn’t provide any routing (ie it doesn’t know about the calling socket’s identity), but it is fully asynchronous (non blocking)

PUSH

This is known as a PushSocket in NetMQ. This would typically be used to push messages to workers, within a pipeline pattern

PULL

This is known as a PullSocket in NetMQ. This would be used by one part of a pipeline pattern, which would pull work from a PUSH socket and then do some work.
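A minimal PUSH/PULL sketch might look like this (older NetMQ API as used in this series; the endpoint is made up for the example):

```csharp
using (NetMQContext context = NetMQContext.Create())
using (var push = context.CreatePushSocket())
using (var pull = context.CreatePullSocket())
{
    push.Bind("tcp://127.0.0.1:5010");
    pull.Connect("tcp://127.0.0.1:5010");

    // the PUSH socket fans work items out to connected PULL workers
    push.Send("Work item 1");

    // a worker picks the item up and does some work with it
    string work = pull.ReceiveString();
}
```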

Standard ZeroMQ Socket Pairs

There are pretty strict recommendations about the pairing of the sockets we just discussed. The standard pairs of sockets that you should stick to using are shown below.

Any other combination will produce undocumented and unreliable results, and future versions of ZeroMQ will probably return errors if you try them

PUB and SUB

A standard Pub/Sub arrangement
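As a hedged sketch of that arrangement (older NetMQ API; topic name and endpoint are made up for the example):

```csharp
using (NetMQContext context = NetMQContext.Create())
using (var pub = context.CreatePublisherSocket())
using (var sub = context.CreateSubscriberSocket())
{
    pub.Bind("tcp://127.0.0.1:5011");
    sub.Connect("tcp://127.0.0.1:5011");

    // only messages whose 1st frame starts with "TopicA" are delivered
    sub.Subscribe("TopicA");

    pub.SendMore("TopicA");
    pub.Send("Hello subscribers");

    string topic = sub.ReceiveString(); // the topic frame
    string body = sub.ReceiveString();  // the payload frame
}
```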

XPUB and XSUB

A standard Pub/Sub arrangement

REQ and REP

A standard synchronous request/response arrangement

The REQ client must initiate the message flow. A REP server cannot talk to a REQ client that hasn’t first sent it a request. Technically, it’s not even possible, and the API also returns an EFSM error if you try it.

REQ and ROUTER

A standard synchronous request with an asynchronous server responding, where the router will know how to do the routing back to the correct request socket

In the same way that we can replace REQ with DEALER ….we can replace REP with ROUTER. This gives us an asynchronous server that can talk to multiple REQ clients at the same time. If we rewrote the “Hello World” server using ROUTER, we’d be able to process any number of “Hello” requests in parallel.

We can use ROUTER in two distinct ways:

As a proxy that switches messages between frontend and backend sockets.

As an application that reads the message and acts on it.

In the first case, the ROUTER simply reads all frames, including the artificial identity frame, and passes them on blindly. In the second case the ROUTER must know the format of the reply envelope it’s being sent. As the other peer is a REQ socket, the ROUTER gets the identity frame, an empty frame, and then the data frame.

DEALER and REP

An asynchronous request with a synchronous server responding. When we use a standard REQ socket (ie not a DEALER) for the client, it does one extra thing for us, which is to include an empty frame. So when we switch to using a DEALER for the client, we need to do that part ourselves, by using SendMore, which we will get into within the next post.

If we rewrote the “Hello World” client using DEALER, we’d be able to send off any number of “Hello” requests without waiting for replies.

When we use a DEALER to talk to a REP socket, we must accurately emulate the envelope that the REQ socket would have sent, or the REP socket will discard the message as invalid. So, to send a message, we:

Send an empty message frame with the MORE flag set; then

Send the message body.

And when we receive a message, we:

Receive the first frame and if it’s not empty, discard the whole message;

Receive the next frame and pass that to the application.
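Those steps can be sketched like this (assuming a connected `dealer` DealerSocket variable, and the older NetMQ API used in this series):

```csharp
// DEALER talking to a REP socket : we must emulate the REQ envelope
// ourselves, ie an empty delimiter frame followed by the payload
dealer.SendMore(string.Empty); // empty frame, sent with the MORE flag
dealer.Send("Hello");

// when the reply arrives we must strip the empty frame off again
bool more;
string empty = dealer.ReceiveString(out more); // should be empty, else discard
string reply = dealer.ReceiveString(out more); // the actual payload
```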

DEALER and ROUTER

An asynchronous request with an asynchronous server responding, where the router will know how to do the routing back to the correct request socket

DEALER and ROUTER give us the most powerful socket combination, which is DEALER talking to ROUTER. It gives us asynchronous clients talking to asynchronous servers, where both sides have full control over the message formats.

Because both DEALER and ROUTER can work with arbitrary message formats, if you hope to use these safely, you have to become a little bit of a protocol designer. At the very least you must decide whether you wish to emulate the REQ/REP reply envelope. It depends on whether you actually need to send replies or not.

DEALER and DEALER

An asynchronous request with an asynchronous server responding (this should be used if the DEALER is talking to one and only one peer).

With a DEALER/DEALER, your worker can suddenly go full asynchronous, sending any number of replies back. The cost is that you have to manage the reply envelopes yourself, and get them right, or nothing at all will work. We’ll see a worked example later. Let’s just say for now that DEALER to DEALER is one of the trickier patterns to get right, and happily it’s rare that we need it.