In this blog, I will show how you can use Application Insights to monitor your web application written in Python using the Flask framework. The sample code can be found here. To follow along, you should have access to an Azure subscription. Also, your machine should have the following installed:

Git client

Code editor such as Visual Studio Code

Python 2.7

Setting-up Application Insights

Login to Azure Portal then click on Create a resource

Click on Development tools, then select Application Insights

Enter the app insights details as shown below then click create

Once the resource is created, go to AppInsightsRG and click on pythonapp.

In Essentials copy the Instrumentation Key and keep it handy. We will need that later when we start instrumenting our code.

Downloading the Sample Application

The sample application we will be using for this demo can be cloned from here. Once the code is downloaded from the repository, you can run it by typing:

python .\runserver.py

Instrumenting your Python Application

Install the latest Application Insights Python SDK by running the following command:

pip install applicationinsights

Once the SDK is installed, place the snippet below at the bottom of the __init__.py file. Make sure you replace '<YOUR INSTRUMENTATION KEY GOES HERE>' with the key you copied in the previous step.
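
The exact snippet lives in the sample repository; as a rough guide, a minimal sketch of server-side request instrumentation with the applicationinsights package might look like the following. The WSGIApplication middleware wraps the Flask app so incoming requests are reported as telemetry (this assumes your Flask app object is named app, as in the sample project):

# __init__.py -- minimal sketch of Application Insights request instrumentation
# Assumes the Flask app object is named `app`, as in the sample project
from applicationinsights.requests import WSGIApplication

instrumentation_key = '<YOUR INSTRUMENTATION KEY GOES HERE>'

# Wrap the Flask WSGI app so every incoming request is tracked in Application Insights
app.wsgi_app = WSGIApplication(instrumentation_key, app.wsgi_app)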

Restart the application, launch a web browser, go to the site, and click the various links on the page. Wait a few minutes, then go to the Application Insights instance and refresh; you should see the HEALTH metrics updated:

At this point, we are able to send telemetry from the server. However, notice that PAGE VIEW LOAD TIME metric is not updated. The reason for that is we are only sending information from the server side. To get a more complete picture we need to also get the client side to send telemetry to our Application Insights instance. For that, we will need to inject code in the client side of our web application.

Under the templates folder, open layout.html. In the <head> section of your HTML template, copy the following JavaScript snippet. Again, make sure you replace '<YOUR INSTRUMENTATION KEY GOES HERE>' with the key you copied in the previous step.

Once the client side code is injected, save the file and go to your browser, refresh the page and perform a few clicks. Wait for a few minutes and go to your Application Insights instance, refresh, and you should see the PAGE VIEW LOAD TIME metric updated.

Sending Custom Events

To send custom events from your application, place this snippet at the bottom of the views.py file.
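
The snippet in the repository sends a custom event when a particular route is hit; a hedged sketch of what such a route could look like is shown below. The route name, the event name, and the assumption that the Flask app object is named app are all illustrative, not necessarily what the sample uses:

# views.py -- illustrative sketch of sending a custom event
# The route name and event name are hypothetical; `app` is the Flask app object
from applicationinsights import TelemetryClient

tc = TelemetryClient('<YOUR INSTRUMENTATION KEY GOES HERE>')

@app.route('/customEvent')
def custom_event():
    # Record a custom event; it shows up under custom events/metrics in Application Insights
    tc.track_event('CustomMetricButtonClicked')
    tc.flush()  # send immediately instead of waiting for the telemetry buffer to fill
    return 'Custom event sent'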

Also create a control on your main page that triggers the custom event. Open index.html under the templates folder and look for the <div class="row"> tag. Replace the content of that section with the following:

Click on the Custom Metric button a few times then go to your Application Insights instance. Under Metrics Explorer, click on Add new chart and on the right make the selection as shown below:

The chart on the left should show the data related to the custom metric sent from your application

Reporting Handled Exceptions

To show an example of handled exceptions sent to Application Insights, let’s create a route that raises random exceptions. We will also inject the code that reports those exceptions to Application Insights. Add the snippet below to your views.py file:
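
Again, the exact code is in the sample repository; a minimal sketch of a route that randomly raises and reports a handled exception might look like this (the route name is hypothetical, and the TelemetryClient tc from the previous snippet is reused):

# views.py -- illustrative sketch of reporting a handled exception
# Reuses the TelemetryClient `tc` created above; the route name is hypothetical
import random

@app.route('/handledException')
def handled_exception():
    try:
        if random.random() < 0.5:
            raise ValueError('Something went wrong while handling the request')
        return 'No exception this time'
    except ValueError:
        # With no arguments, track_exception() reports the exception currently being handled
        tc.track_exception()
        tc.flush()
        return 'A handled exception was reported to Application Insights'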

Click on the Handled Exception button a few times then go to your Application Insights instance. Under Metrics Explorer, click on Add new chart and on the right make the selection as shown below:

The chart on the left should show the exceptions sent from your application

Wrap-up

We have just shown how you can send telemetry data from a Python application that uses the Flask framework. You can get the complete code from a branch called instrumented here. Note that the complete code is refactored, so it might not look exactly the same as the steps described above.

I hope this blog was useful. Please leave your feedback on how future blogs can be improved.

If you are building a modern application and are following modern design principles, there is a good chance your application is composed of a number of layers and services. Also, your services might be communicating with one or more databases to persist their data. For services to communicate with one another and to talk to databases, they need to leverage information that is considered sensitive. Hackers know all too well that this is one area many teams don’t properly protect, and they are always on the lookout to get hold of that information and leverage it for attacks. Sensitive information in this case includes:

Database username and password or connection strings

API keys

Authentication (e.g., OAuth) tokens

Third party service username and password

Any other sensitive information your app might need

A common practice to handle this kind of information is to put it in a configuration file somewhere. Although the file will be replaced when it goes to production, this is still not recommended, for at least the following reasons:

After the config file is checked in to the version control system, the sensitive information would be exposed

When information changes, a deploy might be required for the new changes to take effect

Secrets for production environment will need to be stored somewhere where they are properly protected. Sometimes that’s left to the judgement of the secrets maintainer, which might result in secrets being compromised if they are not kept in a secured location

Maintaining lifecycle of these secrets might not be easy since they might be scattered all over the place

In the remainder of this blog, I will detail an approach that not only helps you properly store and maintain the lifecycle of your secrets, but also lets your application access those secrets without exposing them outside of the application.

Approach Overview

In this walkthrough, I will show how secrets can be securely stored in Azure using a capability called Azure Key Vault. I will also show how those secrets can be accessed from an application. The application I will use for this example is a .NET MVC web application. Finally, I will create a build in VSTS that includes a deploy step, which deploys the application and supplies it with the sensitive information – in this case, the database connection string.
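
To make the flow concrete before diving into the portal steps, here is a hedged, illustrative sketch of what retrieving a Key Vault secret at runtime boils down to: authenticate to Azure Active Directory with the application's client id and secret, then call the secret's URL with the resulting token. The sketch is written in Python purely for brevity (the sample application itself is .NET), and the tenant id, client id/secret, vault URL, and api-version are placeholders you would adjust for your environment:

# Illustrative sketch only: fetch a Key Vault secret over REST using an AAD app registration.
# All identifiers below are placeholders for your environment.
import requests

TENANT_ID = '<your_tenant_id>'
CLIENT_ID = '<your_client_id>'
CLIENT_SECRET = '<your_client_secret>'
SECRET_URL = 'https://<your_vault>.vault.azure.net/secrets/connectionstring'

# 1. Get an access token for the Key Vault resource using the client-credentials flow
token_response = requests.post(
    'https://login.microsoftonline.com/{}/oauth2/token'.format(TENANT_ID),
    data={
        'grant_type': 'client_credentials',
        'client_id': CLIENT_ID,
        'client_secret': CLIENT_SECRET,
        'resource': 'https://vault.azure.net',
    })
access_token = token_response.json()['access_token']

# 2. Read the secret value (the api-version may differ depending on when you read this)
secret_response = requests.get(
    SECRET_URL,
    params={'api-version': '2016-10-01'},
    headers={'Authorization': 'Bearer ' + access_token})
connection_string = secret_response.json()['value']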

Enable your web app to communicate with Azure Key Vault

Create a sample project

In this example, we will use the application created by one of the Visual Studio Enterprise 2015 templates. Here are the steps to create a sample project:

Using Key Vault to Store Secrets

Go to mydemoRG and create a Key Vault with a unique name. In my case I named it

Once created, click on it and go to Overview => Secrets => Add

For the upload option choose Manual, for the name enter connectionstring, and set the value field to the connection string of the database created earlier. To obtain the connection string from the database created earlier, click on the database, go to Overview, and then click on Show database connection strings. Make sure {your_username} and {your_password} are replaced with the actual values for your environment before copying it into the secret value. Once done, click Create.

Go back to the key vault you created and click Overview then Principals

Click Add New

In Select principal, type the name you registered earlier with Azure Active Directory

For Secret Permissions, check Get

Click Ok and then Save

Build and Deploy app using VSTS

To build your app using VSTS, you will need to create a Team Project and then push your code to VSTS repository. Once code is pushed, you can create a build using Visual Studio template with the following steps:

In Build Solution step, ensure that MSBuild Arguments field is set to:

In Azure App Service Deploy, make sure that the step is pointing to a valid Azure subscription by properly setting Azure Subscription field. App Service name should be set to the name of the web app created in the previous section.

In the Azure PowerShell Script step, make sure that the Script Path field points to SetAzureWebsite.ps1. In the Script Arguments field, enter:

your_client_id and your_client_secret are the values you obtained when registering your application with Azure Active Directory

your_keyVault_connectionString_url can be obtained by going to the Key Vault we created earlier, clicking on Secrets, then the connectionstring secret we created earlier, then selecting the current version and copying the Secret Identifier. Here is an example of this value: https://myappvault.vault.azure.net/secrets/connectionstring/b546a57aa8eb454f8713007063c2f12f

Once done, queue a new build. When the build is done, navigate to the web app and ensure that application is running correctly.

When dealing with deploying a large number of components in Azure, a single ARM template might be challenging to manage and maintain. ARM linked templates allow you to make your deployment more modular making the templates easier to manage. When dealing with large deployments, it is highly recommended to consider breaking down your deployment into a main template and multiple linked templates representing different components of your deployment.

Deploying ARM templates can be performed using a number of methods, such as PowerShell, Azure CLI, and the Azure Portal. A recommended approach, however, is to adopt one of the DevOps practices, namely Continuous Deployment. VSTS is an application lifecycle management tool hosted in the cloud and offered as a service. One of the capabilities VSTS offers is Release Management.

In this blog I will detail how you can deploy linked ARM templates using the Release Management feature of VSTS. In order for the linked templates to be deployed properly, they need to be stored in a location that is reachable by Azure Resource Manager. A location that fits the bill here is Azure Storage, so we will show how Azure Storage can be used to stage the ARM template files. I will also show some recommended practices around keeping secrets protected by leveraging Azure Key Vault.

The scenario I will walk through here is to deploy VNet with a Network Security Group (NSG) structured as linked templates. We will use VSTS to show how Continuous Deployment can be setup to enable teams to continuously update Azure with new changes each time there is a modification to the template.

Creating Azure Storage Account

Login to Azure portal and create an Azure Storage account following the steps documented here. Specify the parameters as shown below. Make sure the Name is unique.

Once deployment is done, go to the storage account and click on Shared access signature then click on Generate SAS and connection string. Copy the SAS token generated and keep it handy as we will use it later

Go to the storage account Overview page and then click on Blobs

Add a Container called “armartifacts” as shown below

Once the container is created, click on it and go to Container properties. Copy the URL field and keep it handy. We will need it later as well

Protecting Secrets with Azure Key Vault

In the Azure portal, create an Azure Key Vault resource

Click on the Azure Key Vault you just created and click on Secrets

Click on Generate/Import to add the SAS Token

For name, enter “StorageSASToken” and enter the Azure Storage shared access signature key you copied in a previous step to the Value field

Click Create

Linking Azure Key Vault to VSTS

Login to your VSTS account. If you don’t have one, you can create one for free here

Go to Build and Release hub in VSTS and click Library

In Variable Group name field enter “AzureKeyVaultSecrets”

Toggle “Link secrets from an Azure key vault as variables”

Select your Azure subscription and then the Azure key vault you created earlier and click Authorize

Once authorization is successful you can add variables by clicking “Add” and you should be presented with the option to add references to the secrets in the Azure key vault. Once you added the reference to the StorageSASToken, click Save.

Set-up Continuous Deployment using VSTS

At this point we should have everything in place to deploy our linked template to Azure. To set up the deployment pipeline, login to VSTS and click on Build and Release, then Releases, then create a release definition. Select an empty process and name your environment Production. For artifact, point to your GitHub account

Click Variables then Variable groups then click on Link variable group. Select the AzureKeyVaultSecrets variable group that you created in an earlier step

Add a variable called AzureBlobStorageURL and paste in the value from the url you copied earlier when you created the Azure Storage account

Add a variable called blobContainerName and put the name of your Azure blob container

Add a variable called StorageAccountName and put the name of your Azure storage account

In the release tasks, create the following steps:

Once steps are added, click save and kick-off a release. The release should finish successfully

Check Azure and you should see the VNet with NSG created

We have just shown how you can break down your ARM template and make it more modular by transforming it into linked ARM templates. We have also walked through how you can use VSTS to continuously deploy your linked template while enabling the release process to read the SAS key from Azure Key Vault during deployment.

I hope you found this blog useful. Please leave feedback on how this can be made better.

In this blog, I will detail the steps to deploy a containerized ASP.NET Core Web api app into an OpenShift Kubernetes cluster. The reader of this blog is assumed to have basic knowledge on the following topics:

Containerization with Docker

Cluster management and container orchestration using Kubernetes

Team Foundation Server (TFS) or Visual Studio Team Services (VSTS)

To reproduce the steps detailed in this blog, the following tools are needed:

Visual Studio 2017 Enterprise Edition

Visual Studio Team Services account. You can sign up for a free VSTS account here

The first part of this blog will go over how to create a sample ASP.NET Core web application with Docker support that we will use as our demo app to deploy to the Kubernetes cluster. Then we will go over how VSTS can be used to create a CI build that builds the application, packages the build output into a Docker image, and pushes the image to Docker Hub. After that, we will point you to resources that show how you can create a test OpenShift Kubernetes cluster. Finally, we will go over how VSTS Release Management can be used to continuously deploy to the OpenShift Kubernetes cluster. As you might have guessed, this might not be easy to set up. Luckily, the Continuous Integration (CI) and Continuous Deployment (CD) aspects of this are greatly simplified by VSTS, as you will see later. So let’s get to work…

Setting up CI for an ASP.NET Core app using VSTS

In this section, I will show how you can create a sample ASP.NET application with Docker support. I will also walk through how to create a Continuous Integration (CI) build for this app using VSTS. First we will need to create a VSTS team project. To do so, browse to your VSTS instance and create a team project. You can sign up for a free VSTS account here

Publish the newly created repository by going to Team Explorer => Sync

Under Push to Visual Studio Team Services, click Publish Git Repo.

Select your VSTS account and click on Advanced then pick the team project you created earlier

Click Publish repository.

Now you should have the code published in your VSTS instance. You can verify that by logging in to your VSTS team project and browse to Code:

Creating Continuous Integration Build using VSTS

Go to Build and Release and select Builds then press New Definition button

You will be presented with a list of templates. Scroll down and select Container (PREVIEW) and click Apply

Configure the build step as follows:

For Agent queue select Hosted Linux Preview

Click on Get Sources Step and make sure your repo is selected along with the targeted branch

Add .NET Core step three times and configure each task as follows.

For the first task, set the Command field to restore and the Project(s) field to **/*.csproj

For the second task, set the Command field to build, the Project(s) field to **/*.csproj, and the Arguments field to -c Release

For the third task, set Version to 2.* (preview), the Command field to publish, the Project(s) field to **/*.csproj, and the Arguments field to -o ./obj/Docker/publish

Click on the Build an image step. You can accept the default values for that step. For Image Name, make sure you qualify the image name with your Docker repo, e.g. yourrepo/name:version. Otherwise the build will fail.

In order to push to your Docker Hub repository, you will need to configure a Docker Registry connection. To do so, click on Manage; this will take you to a page where you can add the connection to your Docker Hub

Click New Service Endpoint and then select Docker Registry

In the Docker Registry form select Docker Hub as registry type and enter a connection name and your Docker ID and password as shown below. If you don’t have a Docker Hub account, you can create one for free here. Once information is entered, click on Verify Connection to ensure the information entered is correct. If connection test is successful click OK to save the connection.

Go back to the Push an image step, refresh the Docker Registry Connection field, and select the connection you created in the previous step. You can leave the default values for all the other parameters

For the Image Name field, make sure you qualify the image name with your Docker repo, e.g. yourrepo/name:version. Otherwise the build will fail.

Go back to the process tasks and add a task called Copy Files and configure it as shown below

Next add a task called Publish Artifact and configure it as shown below

Click on Triggers tab and ensure that Enable continuous integration is checked

Click on Save & Queue.

Click Save & Queue in Save build definition and queue window. This will kick-off a build.

The build should finish successfully

Setting up OpenShift Environment

To get access to an OpenShift cluster that you can use to follow the steps in this demo, follow the steps in this resource.

Browse to "https://your_url/api/values". Make sure your_url is the URL returned from the previous step. In my case, the complete address is https://sampleapp-deployment-sample-project.192.168.99.100.nip.io/api/values

Setting up Continuous Deployment

To deploy the Docker container created by our CI build to the OpenShift Kubernetes cluster, we first need to check in the deployment config file samplewebapp-oc-deploy.yml we created earlier. But before checking it in, replace the image tag with a placeholder, as shown below, that we will substitute at deploy time. Once the change is made, check the file in to your repo.

To enable Continuous Deploy, click on the trigger button and toggle the Continuous deployment trigger to enabled

Click on Tasks to go to the tasks for that release definition, then add the following tasks

Click the + sign to add tasks to the Agent phase

For the first task, we want to replace the imagetag place holder with the real image tag. For that, click Utility then add Replace Tokens task. This task can be found in Marketplace in case you don't already have it

Configure the task as shown below. For Root directory, ensure you navigate to where the yml file is located:

Go to Variables tab and add IMAGETAG variable as shown below. Make sure you replace "Your_Repo_Name" with your Docker repository name

Go back to the release tasks and add a Command Line task. If you can't see the task, you can install it from the VSTS Marketplace.

Configure the task as shown below. Ensure that your_openshift_url, your_username, your_password are replaced with values from your environment

Add another Command Line task and enter the following code oc apply -f samplewebapp-oc-deploy.yml. Make sure that you set Working folder to where the yml file is located as shown below

Click on Agent phase and for Agent queue, select an agent that is running in a machine that has oc utility installed. For information on how to deploy a build and release agent, refer to this resource

Name your release definition, save and then kick off a release

Make a change to your application. When you commit that change, a continuous integration build will be kicked off

Once the build is done, a release will be triggered. Once the release is done, you should see your changes deployed to the OpenShift Kubernetes cluster

I hope this was informative. Please let me know if one of the steps is unclear or you know a better way of doing anything I described in this blog. Your feedback will be greatly appreciated. Happy deploying!!

As companies embark on their digital transformation journeys, pressure on IT organizations has been mounting to levels that have never been experienced before. Businesses have the following expectations, among others, of their IT organizations:

Applications need to be developed, deployed and enhanced at a rapid pace

Applications are always available, resilient and performant

Features expected to match or exceed competitors

Applications need to run on different form factors including PCs and mobile

All applications delivered must be secure and compliant

To meet those expectations, not only do companies need to have the capabilities to build these kinds of solutions, but they also have to build them faster than ever before. This is why many organizations are rethinking how they architect and build solutions so that they can better respond to the demands and expectations businesses place on them. Also, IT organizations are constantly on the lookout for ways to enhance agility, optimize resource consumption, and minimize solutions' time to market. One way businesses are achieving those goals is by embracing the cloud. Trends show that organizations of all sizes have either moved toward scaling down their on-prem data centers in favor of the cloud or are contemplating adopting the cloud.

A common concern most organizations have while contemplating the adoption of the cloud is how to approach their legacy apps. When thinking about moving a legacy application to the cloud, a decision has to be made whether the application should be moved as is (a strategy called “lift-and-shift”) or whether it has to be redesigned and transformed to become cloud native. Furthermore, organizations that choose to transform their applications often undertake the effort to modernize the application architectures as well as embrace a DevOps mindset to enhance agility, reliability and productivity.

Recently I went through an effort to modernize a monolithic n-tier eCommerce application. At a high level, the application architecture is depicted below:

This effort included moving the application to the cloud as well as redesigning it to follow a Microservices architecture. Also, part of this effort was to modernize the application components and take advantage of Azure PaaS offerings whenever applicable. The final design we decided upon is depicted below.

In this blog I will go through each application layer, describe aspects that need to be considered to properly implement it and offer a solution to address it using Azure.

External Facing Components

Web Portal

This component is the entry point to the system. Because this component is external facing, the following aspects have to be considered to implement this capability

Availability: users should be able to access the system at any time. The system should also be resilient to handle application as well as infrastructure failures. Also, there should not be any single point of failure in the system. If a component in the system fails, the system should continue to respond to requests that are targeting other components in the system

Scalability: as the user base grows, the system should continue to honor its SLAs.

Security: only authorized users should be allowed to login and access the system. Also data can only be accessed by users that have the proper permissions to view and manipulate it.

Maintainability: because this component is the entry point to the system it is paramount that mechanisms be put in place to enable easy deploys and near-zero downtime updates

The following Azure services can be used to address this aspect of the solution

Azure web App

Traffic Manager

Azure Service Fabric

Azure Container Service

Azure Active Directory (for authentication and authorization)

Content Delivery Network (CDN)

Authentication and Authorization

Azure Active Directory B2C is a consumer identity and access management service in the cloud that is highly available and scales to hundreds of millions of identities. It can be easily integrated across mobile and web platforms. Users can log on to all your applications through fully customizable experiences by using their existing social accounts or by creating new credentials.

Furthermore, Azure AD B2C addresses all the capabilities listed above and more. The following are some of its capabilities:

Multi-factor Authentication

Self-service Password Management

Role Based Access Control

Application Usage Monitoring

Rich auditing and Security Monitoring and Alerting

Content Delivery Network (CDN)

A Content Delivery Network (CDN) is a group of distributed systems used to improve website performance by serving website content (e.g., images, scripts, videos, etc.) from locations that are geographically closest to where requests are made. If users are expected to be geographically dispersed, a CDN must be used to enhance the responsiveness of the site and improve site usability. Furthermore, the use of a CDN reduces traffic sent to the origin, since a subset of the requests will be handled by the CDN edge servers.

Azure CDN can be used for this component. It offers a global solution for delivering high-bandwidth content that is hosted in Azure or any other location. The Azure CDN cache can be held at strategic locations to provide maximum bandwidth for delivering content to users.

Service Gateway

Azure API Management is a turnkey solution to publish APIs to external, partner and internal developers to enhance agility, efficiency and usability. It accomplishes that by offering the following capabilities:

Expose all APIs behind a single static IP and domain

Get near real-time usage, performance and health analytics

Automate management and integrate using REST API, PowerShell, and Git

Provision API Management and scale it on demand in one or more geographical regions

Self-service API key management

Auto-generated API catalog, documentation, and code samples

OAuth-enabled API console for exploring APIs without writing code

Sign in using popular Internet identity providers and Azure Active Directory

Client certificate authentication

Simplify and optimize requests and responses with transformation policies

Protect your APIs from overload and overuse with quotas and rate limits

Use response caching for improved latency and scale

Internal Facing Components

Domain Services

The recommended implementation option for this layer is REST based APIs that expose services to perform a well-defined function (Bounded Context) in the system. The services need to have their own data layer that should not be shared with other services except through well-designed API calls. Services need to be independent from one another.

When dealing with a system that could encompass a large number of components running on a cluster of machines, management of such infrastructure can be a daunting task. There are a number of orchestrators that try to make this task more manageable. Both Service Fabric and Azure Container Service provide container orchestration capabilities. However, there are key differences between these two services:

Service Fabric

The following diagram shows Microservices using Service Fabric as a deployment target:

Batch Processes

A number of Azure Services can be leveraged to handle any kind of background processing. The following is a list of these services with a brief description:

Azure Batch: a platform service for running large-scale parallel and high-performance computing (HPC) applications efficiently in the cloud. Azure Batch schedules compute-intensive work to run on a managed collection of virtual machines, and can automatically scale compute resources to meet the needs of your jobs.

Azure Functions: a serverless technology that allows you to run small pieces of code in the cloud without worrying about a whole application or the infrastructure to run it.

Logic App: provides a way to simplify and implement scalable integrations and workflows in the cloud. It provides a visual designer to model and automate your process as a series of steps known as a workflow.

Data Persistence Layer

For data persistence, two technologies are dominating the database landscape, namely NoSQL and relational databases. NoSQL databases have gained popularity because they are easier to scale, allow for faster development, and can store unstructured data.

Azure Cosmos DB is a globally distributed database service designed to enable you to elastically and independently scale throughput and storage across any number of geographical regions with a comprehensive SLA. You can develop document, key/value, or graph databases with Cosmos DB using a series of popular APIs and programming models.

For scenarios where data consistency and ACID (Atomicity, Consistency, Isolation, Durability) compliance is important, Azure SQL Database is a high-performance, reliable, and secure relational database-as-a service that can be leveraged without needing to manage infrastructure.

Reporting

Azure offers many services that can be leveraged for reporting. Azure SQL Data Warehouse can be used to load and aggregate data from various data sources to perform analysis and reporting. It is a massively parallel processing (MPP) cloud-based, scale-out, relational database capable of processing massive volumes of data.

To help visualize BI reports, Power BI Embedded can be leveraged to integrate Power BI reports into your web or mobile applications. Power BI Embedded is an Azure service that enables app developers to surface Power BI data experiences within their applications.

One thing to keep in mind is that the approach I described above is not the only approach that fits the scenario at hand. There could be other valid ways to achieve this goal as well.

I hope this blog was helpful to those who are learning about Azure and those of you who are considering moving some of your workloads to the cloud. Note that you can get started and explore Azure for free. Let me know your thoughts about what has been discussed in this blog and please let me know how I can improve by leaving your feedback.

In my last post, I covered how you can create a simple Web API, run the Web API in a Docker container and then deploy the container to a Kubernetes cluster provisioned using Azure Container Service (ACS) in Azure. You can find the full post here.

In this blog, I will cover how CI/CD can be implemented so that the sample Web API can be deployed automatically as soon as a change is made to the Web API code. To create the Web API code with Docker support, follow the section “Creating a simple web API with Docker Support using Visual Studio” in my last post here. This blog assumes you have a VSTS account setup and a Team Project created. You can create a VSTS account for free here. It also assumes that the sample API code is checked in to the Team Project version control.

Pre-requisites

Some tasks we will be using to set up CI/CD don’t come out of the box with VSTS. Instead, you will need to install them from the VSTS Marketplace. The following extensions need to be installed:

Docker Integration

Kubernetes extension

Setting up Continuous Integration

Login to the Team Project in VSTS where the code is checked-in

Tap Build and Release tab and then click on Builds

Click New definition button

Click on empty process link

Add Docker Compose task from the Docker integration extension installed from Marketplace. Set Docker Compose File field to docker-compose.ci.build.yml. Set Command field to up

Add another Docker Compose task from the Docker integration extension installed. Set Docker Compose File field to docker-compose.yml. Set Command field to build

Add Docker step from the Docker integration extension. For Docker Registry Connection field, click the plus sign (+) and you will be prompted to enter your Docker registry credentials. VSTS uses those credentials to push the generated Docker image. Set Action field to Push an image. Set image name to xyz/sampleapi:latest. Make sure you replace “xyz” with your actual Docker repo name

Add Copy and Publish Build Artifacts task and configure it as shown below

Select Kubernetes Apply Task. Click on Add next to k8s end point field and fill out the information for your Kubernetes cluster with information similar to below

For the Kubeconfig field, set it to the content of the config file inside the .kube directory that is generated when az acs kubernetes get-credentials is run, as shown in the previous post

Click OK to return to the task

Set the YAML file and kubectl binary as shown below

Click on Run on agent and make sure you select Hosted Linux Preview

Click the save icon and queue a release; this should conclude setting up Continuous Deployment.

Note: if you make a change to the code, the image version number will need to be updated so that Kubernetes can trigger a deployment. One way to do this is to include a placeholder in the sampleapi.yml as well as the docker-compose.yml files and then replace these placeholders with actual version numbers for each build during build time. A task such as Replace Tokens can be used to accomplish this substitution, as sketched below.
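
If you prefer to see what that substitution amounts to, here is a hedged Python sketch; the file name and the #{IMAGETAG}# token format are assumptions and should match whatever placeholder you actually put in your yml files:

# Illustrative sketch: replace a version placeholder in a deployment file at build time.
# The file name and the #{IMAGETAG}# token are assumptions; use whatever placeholder you chose.
import sys

image_tag = sys.argv[1]  # e.g. the build number passed in by the CI build

with open('sampleapi.yml', 'r') as f:
    content = f.read()

content = content.replace('#{IMAGETAG}#', image_tag)

with open('sampleapi.yml', 'w') as f:
    f.write(content)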

We have just walked through how to set up CI/CD for a Web API with Docker support running on a Kubernetes cluster. I hope this was informative.

As companies continuously seek ways to become more Agile and embrace DevOps culture and practices, new design principles have emerged that are more closely aligned with those aspirations. One such design principle that has gained popularity and adoption lately is Microservices. By decomposing applications into smaller independent components, companies are able to achieve the following benefits:

Ability to scale specific components of the system. If “Feature1” is experiencing an unexpected rise in traffic, that component could be scaled without the need to scale the other components which enhances efficiency of infrastructure use

Improves fault isolation/resiliency: System can remain functional despite the failure of certain modules

Easy to maintain: teams can work on system components independent of one another which enhances agility and accelerates value delivery

Flexibility to use different technology stack for each component if desired

Allows for container based deployment which optimizes components’ use of computing resources and streamlines deployment process

In this blog, I will focus more on the last bullet. We will see how easy it is to get started with containers and use them as a deployment mechanism for our Microservices. I will also cover the challenge that teams might face when dealing with a large number of Microservices and containers, and how to overcome that challenge.

Creating a simple web API with Docker Support using Visual Studio

For this section, I am using Visual Studio 2017 Enterprise Edition to create the Web API.

Select Web API and make sure that Enable Docker Support is enabled and then click OK to generate the code for the web API

Note that the code generated by Visual Studio has the files needed by Docker to compile and run the generated application

Open the docker-compose.yml file and prefix the image name with your Docker repository name. For example, if your Docker repository name is xyz, then the entry in docker-compose.yml should be image: xyz/sampleapi. Here is what that file should look like:

Check the generated Web API code in to a version control system. You can use a git repository within Visual Studio Team Services for free here

Check out the code on an Ubuntu machine with the Docker engine installed. In Azure you can provision an Ubuntu machine with Docker using “Docker on Ubuntu Server” as shown below

Run the code by navigating to the directory where the repository was cloned and then run the following command

docker-compose -f docker-compose.ci.build.yml up && docker-compose up

Verify that the container is running by entering this command

docker ps

This should return the container that was just created in the previous step. The output should look something similar to below

Finally push the docker image created (i.e. xyz/sampleapi) to the Docker image Registry

docker push xyz/sampleapi:latest

As your application becomes more popular and users ask for more features, new microservices will need to be created. As the number of microservices increases, so does the complexity of deploying, monitoring, scaling and managing the communication among them. Luckily there are orchestration tools available that make this task more manageable.

Orchestrating Microservices

Managing a large number of Microservices can be a daunting task. Not only will you need to track their health, you will also need to ensure that they are scaled properly, deployed without interrupting users, and able to recover when there are failures. The following are orchestrators that have been created to meet this need:

Service Fabric

Docker Swarm

DC/OS

Kubernetes

Stacking up these tools against one another is out of scope for this blog. Instead, I will focus more on how we can deploy the Web API we created earlier into Kubernetes using Azure Container Service.

Creating Kubernetes in Azure Container Service (ACS)

For this section, I am using a Windows Server 2016 machine. You can use any platform to achieve this but you might need to make a few minor changes to the steps detailed below

Deploying the Web API

To deploy to the Kubernetes cluster, a deployment descriptor is needed. Create a file called sampleapi.yml on the machine you used to connect to the Kubernetes cluster and fill it with the following:

Periodically check whether the service is exposed. Keep running this command until the service gets an External-IP

C:\kubectl\kubectl.exe get services

Once the service is exposed, obtain its external IP (i.e. xxx.xxx.xxx.xxx) and check whether you can access the API by using either a REST client (e.g., Postman) or simply opening a web browser and entering:

http://xxx.xxx.xxx.xxx/api/values
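
You can also check the endpoint from a script; here is a hedged Python sketch using the requests library (substitute the external IP reported by kubectl):

# Quick check of the deployed API; substitute the external IP reported by kubectl
import requests

response = requests.get('http://xxx.xxx.xxx.xxx/api/values')
print(response.json())  # the default Web API template returns ["value1", "value2"]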

Conclusion

Working with a large number of Microservices can be a challenging task. Also creating an orchestrator cluster of any type can sometimes be hard. With Azure Container Service (ACS), creating a cluster can be as easy as entering one command. Also Visual Studio can be used to jump start a project that has Docker support. I hope this blog was informative. My next blog will cover how CI/CD can be implemented with this scenario.

Nowadays, trends show that market conditions are changing constantly and at a pace we have never seen before. New companies come into mature industries and completely disrupt them while existing companies that have been around for a long time are struggling to survive and to hold on to their market share.

Also, building highly available and resilient software has become an essential competency no matter what business you are in. Nowadays, all companies are becoming software companies. Let’s take for example companies in the retail industry. Before, most companies in this industry competed on who can make products available on the shelves with the lowest possible price. Now companies pursue more advanced and sophisticated techniques to lure customers. Nowadays, it’s all about predicting customers’ behaviors by deeply understanding customers’ sentiments, brand engagements and history of their searches and purchases. There is no doubt that companies who are harnessing these capabilities are more successful and profitable than those that do not.

To win in such market conditions, not only do companies have to have the capabilities to build these kinds of solutions, but they also have to build them faster than their competitors. This is why many organizations are rethinking how they architect and build solutions so that they can better embrace changes in customers’ and the market’s demands. Also, the rise of cloud computing has made organizations embrace design approaches that allow pieces of solutions to be scaled independently to optimize infrastructure resource consumption.

Software Architecture Evolution

Looking at how software designs have evolved over the years, initially applications were mainly monolithic applications targeting desktops. As the internet became more prominent, a new style of applications emerged, namely client-server. In this type of architecture, some code ran on desktops while another part of the application ran on a remote server somewhere. The server component tended to group both business logic and some kind of data persistence mechanism. As applications grew, there was a need to separate the business logic from the data persistence layer, so a new style emerged, namely 3-tier, in which the presentation layer, business logic, and data persistence all lived on separate layers. The separation of layers has grown beyond 3 tiers, giving birth to the concept of n-tier architectures in which teams can create flexible and reusable components. The figure below shows a depiction of this type of architecture

Although the architecture above divides the solution into multiple layers, multiple flaws can be pointed out:

The monolithic nature of the solution makes scaling the app not resource efficient

Code base for the app tends to be large which makes it harder to change and maintain

Collaborating on monolithic apps is challenging since multiple teams could be making changes to the code simultaneously which increases the likelihood of merge conflicts

A fatal error in one of the components could bring the entire solution down

Teams are bound to a specific technology stack for the entire solution

Because of the limitation mentioned above, a new design approach named Microservices has gained popularity in the last few years.

Microservices Overview

Microservices is an approach to application architecture in which an application is composed of small, independent and flexible components, each focusing on solving a single domain within the application. The figure below shows a depiction of such an approach

This approach solves a number of limitations imposed by monolithic applications. Some of the benefits you would get by adopting this approach include:

Ability to deploy subset of the system. If a customer is not interested in one of the modules, they can choose not to deploy that module

Ability to scale specific components of the system. If “Feature1” is experiencing an unexpected rise in traffic, that component could be scaled without the need to scale the other components which enhances efficiency of infrastructure use

Improves fault isolation/resiliency: System can remain functional despite the failure of certain modules

Easy to maintain: teams can work on system components independent of one another which enhances agility and accelerates value delivery

Flexibility to use different technology stack for each component if desired

Enhances the ability of developers to gain domain knowledge for a specific area

Allows for container based deployment which optimizes components’ use of computing resources and streamlines deployment process

In my next post I will drill deeper into the last point which is a suitable deployment strategy for such an architecture. Stay tuned…

Versioning and Deploying Salesforce Metadata using TFS/VSTS

In this blog, I will show how you can use the Visual Studio Team Services (VSTS) version control system to version control Salesforce.com Metadata. I will also show how the Build feature in VSTS can enable you to deploy Salesforce Metadata from one Salesforce development instance to a Salesforce QA instance. Finally, we will configure Continuous Integration (CI) so that each time you commit and push a change to a VSTS git repository, a build is triggered, which deploys to the Salesforce QA instance, giving you immediate feedback on whether your changes broke anything in the Salesforce QA instance.

This walkthrough assumes some familiarity with Salesforce.com programming as well as git based version control system. Also, we will use VSTS but most steps apply to TFS as well. Note that you can get a VSTS account for free at https://www.visualstudio.com/products/visual-studio-team-services-vs

Create a Force.com project, connect it to the Salesforce development instance and retrieve its metadata

Store the retrieved metadata in VSTS git version control system

Set up a Continuous Integration build and deploy to the Salesforce QA instance

Verify the CI behaves as expected

Create a Team Project in VSTS called MySalesforceApp

Login to your VSTS account if you already have one. If not, create one here. Once signed up, you can create a new Team Project by clicking “New”

Enter the Team Project name, select a Process Template and version control as shown below and then click “Create Project”

Connect to MySalesforceApp from Eclipse IDE and clone the repository

In Eclipse, change the perspective to “Team Foundation Server Exploring”. If you don’t see that perspective, get this plugin

Within Eclipse’s Team Explorer, go to “Home” -> “Projects” -> “Connect to Team Projects…”

Click “Servers” and add your VSTS url and then click “OK”

Once server is added, you can click next. You will be presented with a login page. Enter your username and password. Once authenticated, you will be presented with list of team projects.

Select the “MySalesforceApp” Team Project and then click “Finish”

In “Team Explorer “, click on “Git Repositories”, then right click on “MySalesforceApp” and click “Import Repository”

Select “MySalesforceApp” repository and click “Next”

Enter the clone parameters. Take note of the location in which you are cloning the repository. Make sure that this location is the same as the Eclipse Workspace location. You will need to create the Salesforce project in the same location

Clicking “Next” would show the summary page. You can click “Next” and then choose “Create generic Eclipse projects for selected folders”

Click “Next” and then “Finish”

Create a Force.com project, connect it to the Salesforce development instance and retrieve its metadata

Before you create the “Force.com” project, make sure that Eclipse’s Workspace is at the same location as the location where the repository was copied in the previous step

To find the current Workspace location, go to “File” -> “Switch Workspace” -> “Other”. The workspace should show

Enter the project info. For “Project name” enter the same name as VSTS Team Project name (MySalesforceApp). Also, you will need to enter your Salesforce.com org credentials. Click “Next” and then click “Finish”.

Note: in some instances, a Security Token is required. If you don’t enter one, you would get an error window along with the instructions on how to get a valid Security Token.

At this point, the project structure should look like this:

Store the retrieved metadata in VSTS git version control system

To store the code in VSTS git repo, you will need to commit the changes and push the branch to the remote repository in VSTS. To commit the changes, you can either do so from the command line or from Eclipse by going to “Team Foundation Server Exploring” perspective and clicking “Git Repositories” and right clicking “MySalesforceApp” and selecting “Open”

Right click on the repo and select “Commit”

Check all the files and then click “Commit and Push”. This would add the project to the remote repository.

Click “Next” and then “Finish”. At this point you should have the code pushed to the repository in VSTS. To check that, login to VSTS and then go to the “Code” hub; there you can see the project we just pushed.

Setup Continuous Integration build

Get the Force.com Migration Tool from Salesforce. Login to Salesforce.com -> Setup -> Build -> Develop -> Tools. This page contains a number of tools made available by Salesforce.com.

The tool we are interested in is “Force.com Migration Tool”. Click on the link and the tool would download. Once downloaded, unzip the file and copy “ant-salesforce.jar” file.

Create a folder called “deploy” inside the force.com project and paste the “ant-salesforce.jar” file into the “deploy” folder

Once you add the code to Salesforce, import the changes into Eclipse by right-clicking on “src”, then “Force.com”, then choosing “Refresh from Server”

Once done, commit and push. This should trigger the build.

To verify the build has been triggered, login to VSTS and go to the “Build” hub, click on “CI Build” then click on Queued tab. If you don’t see anything in the list, there is a good chance that the build has already completed. You can check completed builds by clicking on “Completed” tab

This would deploy code to the Salesforce QA instance. To check the status in Salesforce, login to Salesforce, then go to “Setup” and then “Monitor” then “Deployment Status”

Let’s simulate a deploy failure by intentionally making the test case fail. In your Salesforce development instance, replace the code inside MathTest with this

Also, if you go to the code, you will notice that the change made in the development instance was not deployed to the QA instance. We have just implemented CI for Salesforce metadata using VSTS.

We just showed how you can store Salesforce Metadata in a VSTS git repository. We also showed how to set up Continuous Integration and deploy Salesforce Metadata to a QA environment. Please leave us your feedback.