Cloud Computing in Today's World


Today, in part 3 of this series, I wanted to discuss some of the options available when deploying a Service Fabric cluster. In part 2 we presented an overview of Azure Service Fabric. If you haven't done so already, you may want to start at the beginning of this series, as it builds the foundation for today's topic.

Deploy Service Fabric Cluster

Note: You must have enough cores available in the region where you plan to deploy your Service Fabric cluster. You can verify this by using the command below, inserting the region you plan to deploy to.

Get Core Count

```powershell
Get-AzureRmVMUsage -Location 'West US'
```

In my subscription I am only using 6 out of 100 cores available for this region. If you do not have enough cores, open a support request and increase your core quota. Make sure you select Resource Manager for the deployment model and specify the region.
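As a convenience, you can filter that output down to just the cores quota. A small sketch, assuming the AzureRM module is installed and you have already signed in with Login-AzureRmAccount:

```powershell
# Show current core usage and the quota limit for the target region
Get-AzureRmVMUsage -Location 'West US' |
    Where-Object { $_.Name.Value -eq 'cores' } |
    Select-Object CurrentValue, Limit
```

If CurrentValue plus the cores your cluster needs exceeds Limit, that is your cue to open the support request.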

When creating a Service Fabric cluster you have a few different deployment options available to you:

Set up a Service Fabric cluster using the Azure Portal

Set up a cluster using an ARM template

Set up a cluster using Visual Studio

Set up a standalone cluster on your own machines (referred to as "cluster anywhere")

Set up a cluster using a Party Cluster

Set up a cluster on your development machine

Set up a cluster using PowerShell

As you can see we have many options to choose from. I will not be covering all of these in this post but plan to cover some of these in future posts.

Let’s cover the first option and deploy a Service Fabric Cluster using the Azure Portal.

Sign into the Azure Portal, if you haven’t done so already.

Click on +New, type “service fabric” and press Enter. In the Everything blade, click Service Fabric Cluster and select Create.

In the Service Fabric Cluster blade we need to provide details for the deployment. Give the cluster a name, which must be 4 to 23 characters long and contain only lowercase letters, numbers, and hyphens. Select your subscription and location, and choose to create a new resource group; doing so will help with lifecycle management and billing.

In the Node Type Configurations we select the number of node types, the VM size, and the number of VMs to include in the cluster. We can have multiple node types (e.g., if we wanted to specify VMs of different sizes and properties), but for this deployment we will stick to one. For the VM size we will accept the default, which is Medium (Standard A2). Enter a Node Type Name and choose the number of VMs to include in the cluster. The minimum number of VMs is 5 and is a requirement for the first node type; however, the cluster can be scaled up or down later. For Application input endpoints, enter the ports to open for your application. These can also be added later. You can leave the Placement properties at their defaults for now and add additional name/value pairs for constraints if needed. Finally, add a user name and password to use for the VMs.

For the Security Configuration, as this is a test environment, I will choose "Unsecure". In a production environment you would want to use "Secure" to prevent unauthorized access. At a high level, you would acquire a certificate, create an Azure Key Vault, upload the certificate to it, and provide the Source Vault, certificate URL, and thumbprint during creation of the Service Fabric cluster. Optionally, you can provide the details for the Admin Client and Read Only Client.
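To illustrate those high-level steps, here is a sketch of preparing a certificate for a secure cluster using the AzureRM PowerShell module. The resource group, vault name, certificate path, and password are all hypothetical placeholders, and the secret format follows the base64-encoded JSON pattern commonly used for deploying certificates to VMs:

```powershell
# Create a resource group and a Key Vault enabled for VM deployment
New-AzureRmResourceGroup -Name 'sf-keyvault-rg' -Location 'West US'
New-AzureRmKeyVault -VaultName 'mysfvault' -ResourceGroupName 'sf-keyvault-rg' `
    -Location 'West US' -EnabledForDeployment

# Package the .pfx as a base64-encoded JSON blob and store it as a secret
$bytes  = [System.IO.File]::ReadAllBytes('C:\certs\mycluster.pfx')
$json   = @{ data = [System.Convert]::ToBase64String($bytes); dataType = 'pfx'; password = 'P@ssw0rd!' } | ConvertTo-Json
$secret = ConvertTo-SecureString -String $json -AsPlainText -Force
Set-AzureKeyVaultSecret -VaultName 'mysfvault' -Name 'mysfcert' -SecretValue $secret

# Values supplied at cluster creation:
#   Source Vault    -> the Key Vault's resource ID
#   Certificate URL -> the Id of the secret returned above
#   Thumbprint      -> the certificate's thumbprint
```

Treat this as a starting point rather than a complete script; in particular, handle the certificate password more carefully than a literal in a real environment.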

Under Diagnostic Settings, Support logs are enabled by default. This setting is required for the Azure Support team to resolve support issues. Application Diagnostics is disabled by default but can be enabled if desired.

Leave the Fabric Settings at default, review the Summary and click Create to start the deployment.

Deployment Status

The deployment took about 14 minutes to complete. Now that it's complete, let's look at what was deployed. If we click on our resource group we see a summary of the resources. Here is what we have as part of our Service Fabric cluster deployment.

We have a VM Scale Set called WebFE with a capacity of 5 nodes, labeled 0-4. VM Scale Sets are an Azure compute resource used to deploy and manage a collection of virtual machines as a set.

We have a load balancer with a public IP address assigned to it, 5 load balancing rules (the 3 we added for ports 80, 83, and 8081, plus 2 added automatically for management operations on ports 19000 and 19080), 5 probes that check health to determine whether the load balancer should continue sending new connections, and 5 inbound NAT rules for RDP so you can connect to each node as needed.

We have our public IP resource for the LB and the virtual network with 2 subnets.

There are 3 storage accounts: one blob store for the VHDs, one table store for diagnostics, and another table store for log files.

Last but not least, we have our Service Fabric cluster, where we can see the health status of the nodes and applications (no applications are deployed yet). Another important item here is the Service Fabric Explorer link.

Clicking on the link opens another browser tab with a really nice portal for visualizing the cluster. Right away, the Essentials menu gives you an easy-to-read dashboard view of the cluster and its health. Clicking on the Details menu brings up Health Events, Load Information, and Upgrade info. Clicking on the Cluster Map menu shows the fault/upgrade domain info for the cluster. Clicking on the Manifest menu shows the cluster manifest that was generated and is used during upgrades.

Under the Cluster tree menu we have two subtrees, Applications and Nodes. Drilling down into Applications and into System we can see there are 5 services running. Those being:

ClusterManagerService

FailoverManagerService

ImageStoreService

NamingService

UpgradeService

Under each service is a partition showing each replica contained within it. You can see that node 2 is currently listed as Primary with the others being ActiveSecondary. Additional applications installed will show up here along with their stateless and stateful services, partitions, and replicas.

Drilling down into Nodes we can see all 5 nodes that make up the cluster and the details around the health state, status, upgrade/fault domain, IP address, unhealthy evaluations, and deployed applications. On the far right there is an Actions menu where you can Activate, Pause, Restart, Remove data, and Remove node state.

This has been a brief overview of a cluster just after deployment. For additional info, see the Service Fabric Explorer documentation.

Now that the cluster deployment has completed, you can connect to your cluster and deploy your applications.

Connecting to the VM

To connect to an individual VM in a Scale Set, you can use either the DNS name or IP address of the Public IP address resource that is associated to the load balancer.

The NAT rules defined during the deployment map an incoming port range: the starting port (3389 by default) maps to port 3389 of the first VM, incoming port 3390 maps to port 3389 of the second VM, and so on. You can view these by clicking on the Inbound NAT rules setting of the load balancer resource.
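In other words, the external port tells you which VM instance you will land on. Assuming a hypothetical cluster DNS name and the default starting port of 3389, connecting from a command prompt looks like this:

```powershell
# RDP to individual scale set instances through the load balancer's NAT rules
mstsc /v:mycluster.westus.cloudapp.azure.com:3389   # first VM (instance 0)
mstsc /v:mycluster.westus.cloudapp.azure.com:3390   # second VM (instance 1)
```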

Unfortunately, this is the default setting when deploying through the Azure Portal. If you wish to change these port ranges, download the JSON template prior to deployment, edit the frontendPortRangeStart and frontendPortRangeEnd values under loadBalancers > inboundNatPools, then perform the deployment using the modified template.
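For reference, the section of the template to edit looks roughly like the fragment below. The pool name and variable reference are illustrative and will vary with your template:

```json
"inboundNatPools": [
    {
        "name": "LoadBalancerBEAddressNatPool",
        "properties": {
            "backendPort": 3389,
            "frontendIPConfiguration": {
                "id": "[variables('lbIPConfig0')]"
            },
            "frontendPortRangeStart": 4000,
            "frontendPortRangeEnd": 4500,
            "protocol": "tcp"
        }
    }
]
```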

I hope this helps to get you started deploying and testing Service Fabric. Initially this series was planned for 3 parts but I soon found there is just too much to discuss. So stay tuned for my next post as we continue to explore further.

Today, in part 2 of this series, I wanted to continue down the path of exploring Azure Service Fabric by presenting an overview of the platform. In part 1 of this series we talked about why you would want to use something like Service Fabric. That led us to a primer discussion around microservices, which just so happen to run on the Service Fabric platform. If you haven't done so already, you may want to read part 1 of this series first, as it will help you build a better understanding.

My Journey to Service Fabric

Recently I was working on a project where the customer had an application hosted in Azure. They were utilizing the Cloud Services platform but soon realized it may not be the best option for their application. With Cloud Services there are some limitations around scaling an application, depending primarily on how the application was architected. They needed a solution where their application workload would be scalable, reliable, and manageable. They had already begun working on a microservices approach for the application, so it was only fitting that they explore Service Fabric. Hence this blog series: I figured I could help others down the Service Fabric road to becoming DevOps unicorns. So let's see what Service Fabric is all about.

What is Service Fabric

Service Fabric, at its simplest, is a platform for running microservices. It can't be that simple, right? Well, it is quite simple, but there is much more to it than that. Service Fabric is a distributed systems platform that simplifies the packaging, deployment, and manageability of scalable and reliable microservices. This platform significantly changes how developers and administrators approach the deployment and management of mission-critical workloads. No longer do they have to deal with building and managing complex infrastructures. Instead they can focus their efforts on their applications, increasing productivity and customer satisfaction.

Service Fabric itself is a shared pool of servers known as the Service Fabric cluster. The cluster is made up of multiple nodes that host distributed microservices, either stateful or stateless. By using microservices we have the ability to scale services independently of each other, which allows developers to push out updates and fixes more frequently, not to mention reducing the chances of a broken service affecting the entire application. In addition, Service Fabric provides application management capabilities for provisioning, deploying, monitoring, upgrading, patching, and deleting services. As a natural progression of application hosting, we have gone from a physical server, to a virtual machine, to a container, to microservices. So what's next? Nanoservices? There is a concept of an application being too fine-grained, requiring more effort to stitch the services together, but we won't worry about that now. The image below shows a good representation of Service Fabric with many microservices deployed on top. Not only can Service Fabric be deployed in Azure, it can also be deployed in other clouds as well as on-premises.

Traditionally, stateless applications required a database of some sort to maintain state, along with caches and queues to address latency. One of the problems with this is that it required more components to manage that didn't scale well together, and each additional connection added to the latency. By using stateful microservices we remove the need for those additional components and can maintain high availability and low latency by keeping the application code close to the data.

Within Service Fabric there is support for application lifecycle management (ALM) from development, through deployment, during management, and decommissioning. By utilizing packaged microservices, multiple instances can be deployed and upgraded independently. Rolling upgrades can be performed to ensure the application is always available. In the event an upgrade fails, automatic rollback will kick in.

Additional Capabilities

self-healing applications

run Service Fabric on your laptop as a dev environment; same code runs in Azure

services can be built using framework of your choice

applications deploy in seconds

write once, deploy anywhere

deploy on Windows Server or Linux (coming soon)

deploy hundreds or thousands of applications per machine

manage using .NET APIs, PowerShell, or REST

monitor and diagnose the health state

scale up or down the cluster, capable of scaling to thousands of machines

redistribute and optimize load after failure

How Service Fabric Works

In Service Fabric you have a cluster of nodes, each running a Windows service called FabricHost.exe that auto-starts on boot. This service starts Fabric.exe and FabricGateway.exe, which together make up the node. Note: when running on your laptop as a dev environment, you essentially run multiple instances of these executables, which show up as multiple nodes. An application package that references the service packages for each service type is copied to the image store. Once the package has been copied, you can create an instance of the application within the cluster by specifying its type. The application instance is then assigned a URI name such as "fabric:/MyNamedApp". Within the cluster you can create multiple named applications, each of which is managed and versioned independently. Named service instances are created under their named application and look like "fabric:/MyNamedApp/MyNamedService". During named service creation you specify a partition scheme, which spreads the service across cluster nodes to allow for scale. Within a partition, stateless named services have instances while stateful named services have replicas, which are kept in sync. Should a replica fail, Service Fabric builds a new replica from the existing ones. In addition, during named service creation, the code, data, and configuration packages are copied to the node(s) running the service. You can create two types of service:

Stateless: use when persistent state is not required, or when state is kept in external storage

Stateful: use when persistent state is required. Uses Reliable Collections or Reliable Actors programming models.
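The flow described above can be sketched with the Service Fabric PowerShell cmdlets that ship with the SDK. The cluster endpoint, package path, and type names below are all hypothetical:

```powershell
# Connect to the cluster's management endpoint (unsecured cluster)
Connect-ServiceFabricCluster -ConnectionEndpoint 'mycluster.westus.cloudapp.azure.com:19000'

# Copy the application package to the image store and register its type
Copy-ServiceFabricApplicationPackage -ApplicationPackagePath '.\MyAppPkg' `
    -ImageStoreConnectionString 'fabric:ImageStore' -ApplicationPackagePathInImageStore 'MyAppPkg'
Register-ServiceFabricApplicationType -ApplicationPathInImageStore 'MyAppPkg'

# Create a named application instance, then a stateful named service
# partitioned across the cluster
New-ServiceFabricApplication -ApplicationName 'fabric:/MyNamedApp' `
    -ApplicationTypeName 'MyAppType' -ApplicationTypeVersion '1.0.0'
New-ServiceFabricService -ApplicationName 'fabric:/MyNamedApp' `
    -ServiceName 'fabric:/MyNamedApp/MyNamedService' -ServiceTypeName 'MyServiceType' `
    -Stateful -MinReplicaSetSize 3 -TargetReplicaSetSize 3 `
    -PartitionSchemeUniformInt64 -PartitionCount 5 -LowKey 0 -HighKey 4
```

We will walk through deployment in more detail later in the series; this is just to connect the naming and partitioning concepts to concrete commands.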

Service Fabric Architecture

The diagram below shows the major subsystems of Service Fabric with a brief description of each.

Hosting and Activation – manages lifecycle of an application on a single node

Application Model – enables tooling

Native and Managed APIs – exposed to devs

In Summary

Service Fabric is being termed the next-gen platform for building and managing cloud-scale applications. If you are interested in seeing how this works or even deploying Service Fabric, stay tuned for the next post in this series where we will Deploy Service Fabric.

This is the first in a series of posts exploring Azure Service Fabric. My goal here is to give you an idea of why you may want to use Service Fabric, provide an overview of the service itself, and demo a few deployment options. Stay tuned throughout the series if you find this of interest.

In this first part of the series I wanted to talk about some of the driving factors for using something like Service Fabric. If you are reading this you probably are already familiar with the term microservices. If not, Wikipedia defines the term like so. “Microservices is a software architecture style in which complex applications are composed of small, independent processes communicating with each other using language-agnostic APIs. These services are small, highly decoupled and focus on doing a small task, facilitating a modular approach to system-building.”

Over the last few years microservices have emerged to replace the traditional tiered architectures and monolithic applications that are tightly coupled. So why is this? Think back to the days when you needed to update your application with a new feature, or worse, you hit the boundaries of your application and needed to scale up or out. It was no easy task, and it caused updates and fixes to build up until a new version of the application was released. As a business, if you weren't able to evolve your application to better suit your end users, then it slowly faded away as people stopped using it.

With today's technologies, when we talk about building applications, you need to consider the scope. Most conversations lead to the cloud in some form or fashion, so we need to ensure that applications are designed for scale and capacity. With scale and capacity being unpredictable, the cloud becomes an ideal platform for running our applications. Scaling becomes much easier, provided that your application has been designed as a collection of fine-grained autonomous services that have their own lifecycles and collaborate together. This allows you to get feature updates and enhancements to your customers at a much faster pace, all while continuing to improve your application based on customer feedback.

This is what is known as the lean startup model, wherein you start with an idea, build (code), measure (data), and learn (pivot if needed). This in turn creates more reliable releases. Does any of this sound somewhat like the concept of DevOps? Sure it does; after all, our goal is to accelerate the flow of work through dev, test, and IT Ops in a way that speeds application delivery and increases efficiencies.

With that said, it isn't as simple as flipping a switch to go from a monolithic app design to a microservices app design. In some cases you may need to start with a monolithic approach and slowly move towards a microservices design, decomposing the app starting with the lower-tier services that need to be more scalable. Additional benefits of microservices include language agnosticism, interaction with other microservices, resilience to failures, and self-healing services. The end goal of a microservices approach is to compose your application into smaller, autonomous services running in containers across a cluster of machines. This allows smaller teams to focus on a service that is independently tested, deployed, scaled, and upgraded. Can you start to see how all of this fits together: the DevOps culture, the lean startup model, and microservices?

Microsoft was used to delivering applications in a monolithic form but soon realized they needed a way to deploy large-scale services like Azure SQL Database and DocumentDB, along with other core services, that allowed for independent team development and was scalable and reliable with low latency. This is where Service Fabric comes in. Essentially it is a platform for running microservices. We will get into more of its capabilities in part 2 of this series.

One important concept to cover before we move on: without a framework of principles and practices to guide your design, it will be quite difficult to deliver and maintain a microservices environment. You and your teams will want to review the concepts below and put together a framework covering these principles. Doing so will help you achieve success.

Strategic Goals: high-level goals defining where the company is going, so the technology is aligned to best suit the customer (e.g., expand into European and Asian markets)

Principles: rules that align with your strategic goals. Keep the list small, no more than 10, so they can be remembered (e.g., allow for portability)

Practices: how to ensure the principles are carried out; detailed guidance for performing tasks (e.g., coding and design guidelines)

Note: For smaller teams you may end up combining principles and practices.

Being able to articulate this in the form of a chart to summarize these concepts will be most useful and readable. This could be something as simple as the below diagram.

For additional information on this topic check out the book “Building Microservices” by Sam Newman.

Stay tuned for the next post in this series where we will dig deeper into an overview of Service Fabric and its features.