There is a growing change in the software world these days, due to the increase of IoT devices. We live in a world where real-time information is important, and one of the challenges that software projects deal with is how to connect IoT devices together and send messages between them (publish/subscribe notifications).

In this post I want to share the concept of using RabbitMQ and the MQTT protocol as a publish/subscribe message broker, in order to connect IoT devices and software applications together.

Understanding messages in terms of RabbitMQ:

Messaging in RabbitMQ hides the sender and receiver from each other; as developers, we get a flexible infrastructure that encourages decoupling of our applications. In addition, the messages have no set structure and can even store binary data directly.

MQTT stands for MQ Telemetry Transport. It is an extremely simple and lightweight publish/subscribe messaging protocol, designed for constrained devices and low-bandwidth, high-latency or unreliable networks. The design principles are to minimize network bandwidth and device resource requirements whilst also attempting to ensure reliability and some degree of assurance of delivery. These principles also turn out to make the protocol ideal for the emerging “machine-to-machine” (M2M) or “Internet of Things” world of connected devices, and for mobile applications where bandwidth and battery power are at a premium.

MQTT is just a protocol specification; therefore we need a service that implements the protocol. RabbitMQ supports MQTT through a plugin, so every application can send and receive messages. RabbitMQ acts as the MQTT router between each IoT device, the server and its clients.

MQTT Terminology:

Client: Any publisher or subscriber that connects to RabbitMQ over a network is considered a client. Both publishers and subscribers are called clients, since both connect to the centralized RabbitMQ service.

The following example demonstrates how to create a connection to RabbitMQ over MQTT with .NET C#:
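A minimal sketch, assuming the M2Mqtt client library (an assumption — any .NET MQTT client works) and a broker with the RabbitMQ MQTT plugin enabled on its default port 1883; the host name, topic, payload and credentials below are hypothetical:

```csharp
using System;
using System.Text;
using uPLibrary.Networking.M2Mqtt;
using uPLibrary.Networking.M2Mqtt.Messages;

public class MqttDemo
{
    public static void Main()
    {
        // Connect to a RabbitMQ broker that has the MQTT plugin enabled.
        var client = new MqttClient("rabbitmq.example.com");

        // Callback invoked for every message published on a subscribed topic.
        client.MqttMsgPublishReceived += (sender, e) =>
            Console.WriteLine("{0}: {1}", e.Topic, Encoding.UTF8.GetString(e.Message));

        // RabbitMQ's default credentials are guest/guest.
        client.Connect(Guid.NewGuid().ToString(), "guest", "guest");

        // Subscribe to a topic, then publish a message to it.
        client.Subscribe(new[] { "sensors/temperature" },
                         new[] { MqttMsgBase.QOS_LEVEL_AT_LEAST_ONCE });
        client.Publish("sensors/temperature",
                       Encoding.UTF8.GetBytes("21.5"),
                       MqttMsgBase.QOS_LEVEL_AT_LEAST_ONCE,
                       retain: false);
    }
}
```

Any IoT device or application using the same topic name will receive the published payload through its own MqttMsgPublishReceived handler.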

Topic: A topic is the message label that clients subscribe to in order to communicate with each other. A client can subscribe and publish messages by topic. The topic enables RabbitMQ to determine the endpoint address for each message.

Payload: The payload is the data content of the message. The payload can be anything from JSON to MPEG-4 or even binary data.

Networking and RabbitMQ:

Clients communicate with RabbitMQ over TCP/IP. The standard AMQP port of RabbitMQ is 5672, and the MQTT plugin listens on port 1883 by default. For secure communication it is possible to encrypt connections using TLS with RabbitMQ. Authentication using peer certificates is also possible.

Latency and RabbitMQ:

A RabbitMQ instance may be able to handle the message throughput generated by an IoT application, but what happens when 1,000,000 messages are published per second? If the clients are fast enough to process the messages, the Rabbit message queue will eventually be empty. On the other hand, let's say it takes 50ms for Rabbit to take a message from the queue, put it on the network and have it arrive at the consumer, and it takes 4ms for the client to process the message. Once the consumer has processed the message, it sends an ack back to Rabbit, which takes a further 50ms to be sent to and processed by Rabbit. So we have a total round-trip time of 104ms. With a 104ms round trip per message and 1,000,000 messages per second being published, the queues keep growing until RabbitMQ crosses its memory threshold and blocks all publishing. Blocked publishers cause RabbitMQ clients a lot of issues. Luckily, RabbitMQ comes with a built-in clustering solution that can address both problems and make sure the IoT app always has a Rabbit to talk to.

RabbitMQ with the MQTT plugin enabled is a preferred choice for IoT applications. Furthermore, RabbitMQ can be the preferred choice for a push-notification platform from a .NET server to its clients. By sending JSON message content, each client can publish and receive any kind of message structure in a nicely decoupled way.

Scalability is the ability of a system to expand to meet your business needs. You scale a system by adding extra hardware or by upgrading the existing hardware without changing much of the application. In the context of a server application, scalability refers to the ability of the server to keep up as your throughput needs increase and your latency requirements tighten.

In ASP.NET each HTTP request is handled by a thread from the thread pool. The thread pool receives the HTTP request, and if there is no available thread in the pool, it creates a new thread in order to process the request.

For example, let's say that the server is responsible for managing a music library, and the client wants to get a specific album from the server.
In this case, the client request needs to get the result from another server, e.g. a SQL Server database.

If the server sends the request to the database synchronously, the current thread blocks until the data comes back from the database. Blocking threads can hit the scalability of the server.

This scenario causes the server to waste most of its time creating threads, blocking threads, and context switching.

Improving the scalability of the server with asynchronous calls:

Instead of issuing the request to the database synchronously and blocking the thread until the data comes back, we can use the async and await keywords to create an asynchronous request against the database or any other remote server.
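A minimal sketch of such a call, using the music-library example; the connection string, table and column names are hypothetical:

```csharp
using System.Data.SqlClient;
using System.Threading.Tasks;

public class AlbumRepository
{
    // Queries SQL Server asynchronously; the calling thread is released
    // back to the thread pool at each await while waiting on IO.
    public async Task<string> GetAlbumTitleAsync(int albumId)
    {
        using (var connection = new SqlConnection(
            "Server=.;Database=MusicStore;Integrated Security=true"))
        using (var command = new SqlCommand(
            "SELECT Title FROM Albums WHERE Id = @id", connection))
        {
            command.Parameters.AddWithValue("@id", albumId);
            await connection.OpenAsync();                        // no thread blocked here
            object result = await command.ExecuteScalarAsync();  // nor here
            return (string)result;
        }
    }
}
```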

This example creates an IO-bound request against SQL Server asynchronously. It allows the currently running thread to return to the thread pool while waiting for the SQL response, instead of blocking. The thread pool can then use that thread to handle other client requests.

Synchronization:

We all know that synchronization is used to prevent data corruption when two threads access shared data at the same time. By using the lock statement we ensure that only one thread can update the shared data at a time. This can impact application performance and scalability.

The problem with the lock statement:
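Consider this sketch of a money transfer guarded by a lock (the account fields and the simulated failure are hypothetical):

```csharp
using System;

public class BankAccountTransfer
{
    private readonly object _sync = new object();
    public decimal From = 100m;
    public decimal To = 0m;

    public void Transfer(decimal amount, bool failMidway)
    {
        lock (_sync) // compiles to Monitor.Enter / try { ... } finally { Monitor.Exit(_sync); }
        {
            From -= amount;
            if (failMidway)
                throw new InvalidOperationException("simulated failure mid-transfer");
            To += amount; // never runs when the exception above is thrown
        }
        // The finally generated by 'lock' released the monitor, but the
        // accounts are now inconsistent: money left From and never reached To.
    }
}
```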

At first glance this code looks fine: the finally block guarantees that the Monitor releases the lock, which prevents the application from deadlocking. But it can still corrupt data. For example, let's say that inside the lock statement the code executes a money-transfer transaction between two bank accounts. If an exception is thrown right after the amount is subtracted from the first account and before it is added to the second, the finally block releases the lock, the bank accounts are left with corrupted data, and the application continues to run.
Therefore we must catch any exception inside the lock statement in order to avoid corrupting the data.

Asynchronous Lock:

Synchronizing shared data using the lock statement can block the current thread, which can hit the scalability of the server. Therefore, in order to increase the scalability of the server we need to avoid blocking threads, but we still have to synchronize our shared data.
In .NET 4.5 we can use the SemaphoreSlim.WaitAsync method in order to synchronize our data without blocking threads.
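A minimal sketch of this pattern; the shared counter is hypothetical:

```csharp
using System.Threading;
using System.Threading.Tasks;

public static class AsyncCounter
{
    // A semaphore with a single slot acts as an asynchronous mutex.
    private static readonly SemaphoreSlim _mutex = new SemaphoreSlim(1, 1);
    private static int _shared;

    public static int Value
    {
        get { return _shared; }
    }

    public static async Task IncrementAsync()
    {
        await _mutex.WaitAsync(); // waits for the lock without blocking the thread
        try
        {
            _shared++;            // protected access to the shared data
        }
        finally
        {
            _mutex.Release();
        }
    }
}
```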

In this post I'm going to explore Caliburn.Micro's features compared with Prism's features: what the advantages and disadvantages of each framework are, and when using Caliburn.Micro is better than using Prism.

Features:

Binding:

Caliburn.Micro enables us to bind the ViewModel properties to the View based on conventions, whereas Prism uses the ordinary WPF binding mechanism.

The best example to describe this feature is Command binding between the View and the ViewModel: Caliburn.Micro enables us to bind commands to ViewModel handlers without implementing ICommand.
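A sketch of this convention, assuming a View containing a Button named Save (the class and method names are hypothetical):

```csharp
using Caliburn.Micro;

public class ShellViewModel : PropertyChangedBase
{
    // Bound by convention to <Button x:Name="Save" /> in ShellView —
    // no ICommand implementation required.
    public void Save()
    {
        // persist changes...
    }

    // Convention: a matching Can* property guards the button's enabled state.
    public bool CanSave
    {
        get { return true; }
    }
}
```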

In this example, the names of the View controls must be identical to the ViewModel property names.

View First or ViewModel First:

In theory, the ViewModel does not know what the View looks like, and we can create multiple Views for a single ViewModel.

Prism pushes us toward "View First" construction, in which the View is instantiated first and then the ViewModel. On the other hand, Caliburn.Micro prefers "ViewModel First". Both frameworks allow either the "View First" or the "ViewModel First" approach.

ViewModelLocator (Prism):

Because Prism favors "View First", one of the new Prism 5.0 features is the ViewModelLocator. This feature locates and instantiates the ViewModel and assigns it to the DataContext of the View based on conventions.
In order to apply this feature we need to set the prism:ViewModelLocator.AutoWireViewModel="True" attached property on our View.

ViewLocator (Caliburn.Micro):

Caliburn.Micro provides by default a strategy to decide which View to use for a given ViewModel.
The ViewLocator class is responsible for instantiating the relevant View for a given ViewModel.

Conductor (Caliburn.Micro):

A Conductor is simply a ViewModel which owns another ViewModel and knows how to manage its lifecycle. Imagine that we have a master-details application, and we want to display different views depending on the data we clicked on.
A Conductor is the way to implement that behavior.

In this case MainViewModel is the conductor of the details ViewModels.
Whenever the user selects an item, the Conductor activates and deactivates the relevant ViewModel.
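A sketch of such a conductor, with hypothetical details ViewModels for the master-details example:

```csharp
using Caliburn.Micro;

// MainViewModel owns the details ViewModels and manages their lifecycle;
// OneActive keeps exactly one of them active at a time.
public class MainViewModel : Conductor<IScreen>.Collection.OneActive
{
    public void ShowAlbumDetails()
    {
        ActivateItem(new AlbumDetailsViewModel()); // deactivates the current item first
    }

    public void ShowArtistDetails()
    {
        ActivateItem(new ArtistDetailsViewModel());
    }
}

public class AlbumDetailsViewModel : Screen { }
public class ArtistDetailsViewModel : Screen { }
```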

Navigating between Views in the same Region (Prism):

If we go back to our master-details example above, Prism lets us display different views in the same region through the RegionManager. Prism regions are essentially named placeholders within which views can be displayed. In this case we don't need the MainViewModel to own the other ViewModels; we just open and close different views in the same region.

Performance perspective of Conductor and RegionManager:

It is often more efficient to update an existing view instead of replacing it with a new instance of the same view. Both frameworks provide a mechanism to navigate between views.

When we use the Conductor pattern of Caliburn.Micro, we can reuse the ViewModel instance by resolving it from the container.
If we also want to reuse the View instance, we need to register the View as a singleton in the container; the ViewLocator tries to resolve the View from the container first, and only if that fails does it instantiate the View again.
Prism provides the ability to hide and show already instantiated Views and ViewModels.

Event Aggregation:

Both frameworks enable publishing and subscribing to events in a loosely coupled fashion.

Each framework implements the EventAggregator mechanism differently. For example, Prism forces us to inherit from CompositeWpfEvent<> in order to create an event. Publishers use the Publish method and subscribers use the Subscribe method of the CompositeWpfEvent object. Prism also lets subscribers filter the event before the registered handler is called.

With Caliburn.Micro, any object can be published as an event through the EventAggregator. The IEventAggregator interface of Caliburn.Micro provides the Publish and Subscribe methods.

Subscribers need to implement the Handle method of the IHandle<T> interface and call _eventAggregator.Subscribe(this) in the constructor.
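A sketch of a subscriber, with a hypothetical AlbumSelectedEvent message class:

```csharp
using Caliburn.Micro;

// Any plain class can serve as an event.
public class AlbumSelectedEvent
{
    public int AlbumId { get; set; }
}

public class DetailsViewModel : IHandle<AlbumSelectedEvent>
{
    private readonly IEventAggregator _eventAggregator;

    public DetailsViewModel(IEventAggregator eventAggregator)
    {
        _eventAggregator = eventAggregator;
        _eventAggregator.Subscribe(this); // register this handler
    }

    public void Handle(AlbumSelectedEvent message)
    {
        // load the details for message.AlbumId...
    }
}

// Publisher side:
// _eventAggregator.PublishOnUIThread(new AlbumSelectedEvent { AlbumId = 42 });
```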

IOC Container support:

Both frameworks provide the ability to use any kind of IOC Container.

Caliburn.Micro provides a built-in dependency injection container called SimpleContainer.

If we want to use another IoC container, such as Castle Windsor, Unity or any other container, we can register the required services by overriding the SelectAssemblies(), PrepareApplication() and Configure() methods of the BootstrapperBase class.

The UnityBootstrapper and MefBootstrapper extension packages of Prism provide the basic bootstrapping sequences that register most of the Prism services we need.

It is very simple to use those extensions.

Modularity:

We all know the benefits of building a "modular" application instead of a "monolithic" one. By creating a modular application we can easily reuse modules, and the application becomes more flexible and extensible. On the other hand, building a modular application requires us to invest more time and resources to achieve those benefits.

Prism provides the best support for modular application development.

Caliburn.Micro is lightweight compared to Prism.
Caliburn.Micro gives us the benefit of not writing binding expressions in XAML, although in complex cases we must still write XAML binding expressions.

Prism provides better support for creating "modular" applications.
Prism makes managing and maintaining large-scale enterprise projects easier, thanks to the tools it provides for building modular applications.
If we want to choose between the two frameworks, I think it depends on what kind of application we are going to create. For large-scale "modular" applications the best choice is Prism, and for small applications (one solution) Caliburn.Micro is the better choice.

Recently Microsoft released Visual Studio 2015 and the new C# 6 language.
I want to share with you my favorite features of C# 6.

Null conditional operator:

Every developer hates getting a NullReferenceException while the application is running. This means we need to write a lot of null reference checks, and if the object has nested objects, the null reference checks become very unreadable.
For Example:
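A sketch with hypothetical Customer and Address types:

```csharp
public class Address { public string City { get; set; } }
public class Customer { public Address Address { get; set; } }

public static class NullConditionalDemo
{
    public static string GetCity(Customer customer)
    {
        // Before C# 6: nested null checks.
        // if (customer != null && customer.Address != null)
        //     return customer.Address.City;
        // return null;

        // C# 6 null-conditional operator: the chain short-circuits to null
        // if customer or customer.Address is null, instead of throwing.
        return customer?.Address?.City;
    }
}
```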

The IDisposable interface gives the programmer a way to free unmanaged resources and event handlers, in order to avoid memory leaks.

For example:

public class UnmanagedResourceUser : IDisposable
{
    public void Dispose()
    {
        // Free unmanaged resources
    }
}

In this way we have precise control over when the unmanaged resources are freed.

Implementing the IDisposable interface does not give the programmer control over when the GC runs and frees the object from memory.

The using statement in C# gives the programmer an elegant way to call the Dispose method. When the programmer uses the using statement, the CLR calls the Dispose method automatically on the object that implements the IDisposable interface.

For example:

using (UnmanagedResourceUser resource = new UnmanagedResourceUser())
{
    // Do something
}
// Here the Dispose method is called automatically.

What is Finalize in .NET?

The GC calls the Finalize method a moment before the object is removed from memory. In other words, by overriding the Finalize method the programmer can free the object's unmanaged resources.

For Example:

public class UnmanagedResourceUser : IDisposable
{
    // Override the Finalize method (destructor syntax)
    ~UnmanagedResourceUser()
    {
        // Free unmanaged resources
    }

    public void Dispose()
    {
        // Free unmanaged resources
        GC.SuppressFinalize(this);
    }
}

In most cases there is no need to override the Finalize method (destructor). The main reason to override it is if the class uses P/Invoke interop or complex COM objects.

Why do we have both IDisposable and Finalize?

If the programmer implements only the IDisposable interface, there is no guarantee that the Dispose method is called when needed. Therefore, in order to ensure the release of unmanaged resources, the destructor (Finalize) is required.

If the Dispose method has been called, the programmer must suppress the destructor call by using GC.SuppressFinalize.

GC.SuppressFinalize informs the CLR that it is no longer necessary to call the Finalize method (destructor), because the Dispose method has already released the unmanaged resources.

As we've seen, with Task<T> we start the asynchronous operation and the ContinueWith method registers the callback for the result.
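A sketch of the ContinueWith style, using the same DNS lookup as the await example:

```csharp
using System.Net;
using System.Threading.Tasks;

public class DnsExample
{
    // .NET 4 style: start the task, then register a callback for the result.
    public void ProcessDnsAddress()
    {
        Task<IPAddress[]> task = Dns.GetHostAddressesAsync("msdn.com");
        task.ContinueWith(t =>
        {
            IPAddress[] addresses = t.Result;
            // the result must be handled here, inside the callback,
            // not in the method that started the operation
        });
    }
}
```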

However, in .NET 4.5 and C# 5 we can use the await keyword to write the same asynchronous programs.

For Example:

private async void ProcessDnsAddressAsync()
{
    IPAddress[] addresses = await Dns.GetHostAddressesAsync("msdn.com");
}

So what’s the difference between them?

In .NET 4, the asynchronous method we intend to write has to be split into two methods: the actual method and the callback. This creates code that is hard to maintain, debug and follow.

What happens if we want to make many asynchronous calls in a loop? The only option is a recursive method, which is much harder to maintain and follow. That is what .NET 4.5 solves.

In the case of a UI application, this means we want to update the UI with the data returned from the asynchronous call. If we use ContinueWith, we can't update UI data within the ContinueWith callback, because the callback will not run on the UI thread. On the other hand, we can safely manipulate UI data after the await keyword, because the UI thread continues executing after the await statement.

Conclusion:

Behind the scenes the compiler generates the same logic for both scenarios, but async and await give us very nice and maintainable code for asynchronous programming.