Azure, .NET, Programming, & Machine Learning

Azure

Serverless architecture is becoming one of the popular enterprise architectures in this cloud era. Serverless allows developers to focus on coding and solving business problems rather than worrying about infrastructure. In the Microsoft context, an Azure Function is a serverless compute service that enables users to run code on demand (C#, JavaScript, Python, Java, etc.) without having to explicitly provision or manage infrastructure. Your code can be kicked off by a trigger (HTTP trigger, Service Bus trigger, blob trigger, etc.) or on a timer.

I would like to share a recent experience where I volunteered to help a non-profit organization architect their registration application. Based on their requirements, I decided to propose a serverless architecture, from the UI all the way to the database. We used the following Azure resources: an Azure Web App, an Azure SQL Database, and several Azure Functions.

Below is the architecture diagram presented:

Web UI
This is the public-facing interface where users register. Users provide personal as well as billing information. After data entry is complete, the application makes an HTTP POST call to the PaymentAPI, which is an Azure Function.

PaymentAPI
This Azure Function uses the PayPal API to validate and process the billing information. In case of failure, a 500 HTTP status code is returned, letting the user know that an error occurred. In case of success, the order object (without the credit card information) is queued through a Service Bus, which triggers the next step: processing the registration.
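The "without the credit card information" detail above can be sketched as a small sanitization step before the order is queued. This is a conceptual illustration, not the app's real code; the field names are hypothetical placeholders.

```python
import json

# Hypothetical sensitive field names -- the real order object's shape
# isn't shown in the post.
SENSITIVE_FIELDS = {"card_number", "cvv", "expiration"}

def sanitize_order(order: dict) -> str:
    """Drop credit card fields, then serialize the order for the queue."""
    safe = {k: v for k, v in order.items() if k not in SENSITIVE_FIELDS}
    return json.dumps(safe)

order = {
    "order_id": 42,
    "email": "attendee@example.com",
    "card_number": "4111111111111111",
    "cvv": "123",
    "expiration": "12/25",
}
# This JSON string is what would be sent as the Service Bus message body.
payload = sanitize_order(order)
```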

RegistrationAPI
This is a Durable Function. Within a durable function, we can orchestrate many functions using patterns such as chaining and fan-out/fan-in. In this case, I'm using a version of fan-out/fan-in where I run three functions in parallel. These three functions are needed to complete a registration:

SaveOrder:
The SaveOrder function, as the name states, takes the order object and saves it to the Azure SQL database.

SendEmailConfirmation:
As part of the registration, we send an email confirmation letter to the attendee as proof of registration. We used SendGrid with a Razor template to generate the email body content.

AddEmailToMailingList:
This function saves the order's email address to the non-profit organization's mailing list, so that the user receives the next newsletter about the conference. The organization uses Mailchimp for their newsletters and email campaigns, so we used the Mailchimp SDK to add the new subscriber.
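The fan-out/fan-in step that runs these three functions in parallel can be sketched with `asyncio.gather` as a stand-in for the Durable Functions orchestrator. The function names mirror the post, but the bodies are placeholders rather than the real implementations.

```python
import asyncio

# Placeholder activity functions standing in for the real ones described above.
async def save_order(order):
    return f"saved:{order['order_id']}"

async def send_email_confirmation(order):
    return f"emailed:{order['email']}"

async def add_email_to_mailing_list(order):
    return f"subscribed:{order['email']}"

async def process_registration(order):
    # Fan out: start all three activities concurrently, then fan in
    # by waiting for every result before completing the registration.
    return await asyncio.gather(
        save_order(order),
        send_email_confirmation(order),
        add_email_to_mailing_list(order),
    )

results = asyncio.run(
    process_registration({"order_id": 42, "email": "a@example.com"})
)
```

In a real Durable Function, the orchestrator would call activity functions and await `Task.WhenAll` (or the Python equivalent), but the shape of the pattern is the same.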

My colleague, Hiren, and I were working on connecting SQL Server on an Azure Virtual Machine to an Azure Search service. We hit a roadblock when we wanted change tracking to be enabled not directly on the server, but as part of the dacpac deployment.

Change tracking is the method Azure Search uses to keep track of the data it needs to sync, delete, or update. You can manually run a SQL script that enables change tracking at both the database and table levels, but that has to be done in every environment and remembered whenever a new environment is created, so it is not a best practice. If you have a SQL Database project, it is recommended to apply the changes there so that the dacpac generates the script that runs in every environment.
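For reference, the manual script that the dacpac approach replaces looks roughly like this (database and table names are placeholders, and the retention values are examples):

```sql
-- Enable change tracking at the database level (disabled by default).
ALTER DATABASE [MyDatabase]
SET CHANGE_TRACKING = ON
(CHANGE_RETENTION = 2 DAYS, AUTO_CLEANUP = ON);

-- Enable change tracking on each table Azure Search should index.
ALTER TABLE [dbo].[MyTable]
ENABLE CHANGE_TRACKING
WITH (TRACK_COLUMNS_UPDATED = ON);
```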

Thanks to an article by Ray Barley, a former colleague, we found the way forward.

First, we need to enable change tracking on the SQL database, which is disabled by default. Please note that for the instructions below I'm using Visual Studio 2017 with SQL Server 2016.

Right-click the SQL Server project in Visual Studio, go to Properties, then click the "Database Settings" button:

On the Database Settings popup, go to the "Operational" tab and scroll to the "Change Tracking" section. You can now enable change tracking:

Now that you have change tracking enabled at the database level, let's select the table whose changes we would like tracked and do the same thing at the table level:

With so many Azure services available, it can be hard to know which service is right for the specific problem you are trying to solve. Service Bus queues and topics can be confused with Azure Storage queues, even though each solves a specific problem. The Microsoft documentation outlines some of the technical differences, such as message size and push-style delivery, but I wanted to share my experience deciding between Azure Service Bus and Storage queues.

Similarities

Service Bus (queue/topic) and Storage queues are both queue systems that store messages to be retrieved by a client or receiver. The sender can be a web application that wants to notify other apps that, for instance, an order has been submitted or a message is ready to be processed. The Service Bus or Storage queue has one or more receivers ready to process messages from the queue. In our example, the receiver can notify the shipping application that an order is ready to be shipped, or email the recipient the status of their order. Both services can also have multiple receivers, load balancing the messages across them. In other words, if you want your messages processed as soon as possible, you can spin up many instances of your receiver so that each message is retrieved and deleted as soon as it is picked up from the queue. When, then, should one use Azure Service Bus, and when an Azure Storage queue?
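The competing-consumers idea above can be modeled with a plain in-process queue: several receiver threads drain one queue, and each message is retrieved and deleted exactly once. This is a conceptual sketch, not Azure SDK code.

```python
import queue
import threading

# One shared queue, many competing receivers -- the load-balancing
# behavior both Service Bus and Storage queues offer.
work = queue.Queue()
for i in range(20):
    work.put(f"message-{i}")

processed = []
lock = threading.Lock()

def receiver():
    while True:
        try:
            # Retrieve-and-delete in one step, like a queue receiver.
            msg = work.get_nowait()
        except queue.Empty:
            return
        with lock:
            processed.append(msg)

threads = [threading.Thread(target=receiver) for _ in range(3)]
for t in threads:
    t.start()
for t in threads:
    t.join()
```

Three receivers split the work, but every message is still processed exactly once.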

Differences

From an architecture perspective, a Storage queue receiver is usually a Windows service that loops, checking whether the queue has a new message ready to be processed; your architecture needs to allow room for that Windows service. Azure Service Bus does not need a separate Windows service. A Service Bus receiver has an event-driven message pump wrapped in a thread-safe callback or handler. It can be registered with your web application on startup, and it will automatically spin up a thread that checks whether a new message has arrived. With just a few lines of code, you can be ready to send and receive messages. One thing to note: be careful that your receiver logic does not consume so many resources (CPU and memory) that it starts competing with the web server's requests. This was the case for one of my clients: extracting data from PDFs with heavy computational algorithms was not a good fit there, so we moved the code to a Windows service on a machine with 128 GB of memory and 12 CPU cores.
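The message-pump idea can be sketched as a toy class: you register a handler once (say, at web app startup), and a background thread invokes it as messages arrive. This is an illustration of the pattern, not the Service Bus SDK's actual API.

```python
import queue
import threading

class MessagePump:
    """Toy stand-in for a Service Bus-style message pump: register a
    handler once, and a background thread invokes it per message."""

    def __init__(self):
        self._queue = queue.Queue()
        self._handler = None
        self._thread = None
        self._running = False

    def register_handler(self, handler):
        # Registering the handler starts the background pump thread.
        self._handler = handler
        self._running = True
        self._thread = threading.Thread(target=self._pump, daemon=True)
        self._thread.start()

    def send(self, msg):
        self._queue.put(msg)

    def stop(self):
        self._running = False
        self._thread.join()

    def _pump(self):
        # The pump waits for messages and hands each one to the handler,
        # instead of the app polling the queue itself.
        while self._running:
            try:
                msg = self._queue.get(timeout=0.05)
            except queue.Empty:
                continue
            self._handler(msg)

received = []
pump = MessagePump()
pump.register_handler(received.append)  # registered once, e.g. at startup
for i in range(5):
    pump.send(i)
```

Contrast this with the Storage queue model, where your own service owns the polling loop.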

Furthermore, you may have a business requirement that messages be processed in order. With an Azure Storage queue, the order in which messages are processed is not guaranteed, whereas with Service Bus you can enable sessions and truly guarantee First In, First Out (FIFO) order.
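Sessions can be pictured as grouping messages by a session id, with FIFO order preserved within each session even when sessions interleave on the wire. A minimal model, with made-up session ids and message bodies:

```python
from collections import defaultdict

# Interleaved messages tagged with a session id (here, an order id).
messages = [
    ("order-1", "created"), ("order-2", "created"),
    ("order-1", "paid"),    ("order-2", "paid"),
    ("order-1", "shipped"), ("order-2", "shipped"),
]

# Group by session: within a session, arrival order is preserved,
# so a receiver that locks one session sees strict FIFO.
sessions = defaultdict(list)
for session_id, body in messages:
    sessions[session_id].append(body)
```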

Also, Service Bus introduces the notion of topics and subscriptions. The topic is where the sender publishes, and subscriptions are the receivers that filter which messages to process. For example, say you have a workflow scenario where an admin enters new hires into your system, and business requirements dictate that new employees from Europe and Asia go through different sets of logic for taxes, healthcare, and retirement. One can set up three subscriptions that receive a message when a new hire is registered: European hires, Asian hires, and all other hires. This feature is not available in Azure Storage queues.
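The new-hire scenario can be modeled as a topic whose subscriptions each carry a filter predicate. Subscription names and message fields are illustrative; real Service Bus subscriptions filter with SQL-like rules rather than Python lambdas.

```python
# Toy model of a topic with filtered subscriptions.
def publish(subscriptions, message):
    # The topic delivers a copy of the message to every subscription
    # whose filter matches -- the sender doesn't know who receives it.
    for name, (predicate, inbox) in subscriptions.items():
        if predicate(message):
            inbox.append(message)

subscriptions = {
    "european-hires": (lambda m: m["region"] == "Europe", []),
    "asian-hires":    (lambda m: m["region"] == "Asia", []),
    "other-hires":    (lambda m: m["region"] not in ("Europe", "Asia"), []),
}

for hire in [{"name": "A", "region": "Europe"},
             {"name": "B", "region": "Asia"},
             {"name": "C", "region": "Americas"}]:
    publish(subscriptions, hire)
```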

As you can see, I'm an advocate of Service Bus. So far, the only scenario where I had to switch to a Storage queue was a storage constraint: a Service Bus queue or topic can hold only 80 GB of messages, whereas a Storage queue can hold up to 500 TB. In situations where you need that capacity because your receivers can be offline for days, a Storage queue can definitely be the solution.

If you’ve encountered a different scenario where you had to switch from implementing a Storage Queue to a Service Bus or vice versa, please feel free to share!