Sometimes we have to handle a workload of unknown size. Here I'm going to show how this can be solved with the Do Until action and variables.

The scenario is the following: we need to collect all users from a system that has about 14,000 users, but due to API restrictions we can only collect about 150 users per call.
The API is paged, so we need to get page 1, page 2 and so on, with a page size of 150. In addition, there is a boolean called last_page that tells us if we are on the last page.

So how do we solve this in Logic Apps? For Each cannot be used, and calling a function that collects all users would take too long. Fortunately we can use the new variable actions in combination with the Do Until action.
What we will do is create a variable to keep the page number, increment it after each GET of users, and then check last_page; if it is true, we end the Do Until.

Do Until

The Do Until action is a looping action that continues to execute until a condition is met. Fortunately Microsoft has provided some limits to make sure we don't end up in endless loops :)
The limits make sure that the loop is not run more than x times (the default is 60 iterations) or longer than a specified time (the default timeout is 1 hour).
But of course we will not need this since we will create good expressions, right? :)

Variable

The variable actions provide functionality to store and handle variables, currently only integer and float, but hopefully soon also objects.
There are 3 actions associated with variables:

increment - increments the value of a specified variable by a configurable amount

initialize - initializes a variable, setting its name, type (currently only integer or float) and starting value

set - sets the specified variable to a specific value

So in order to use variables we need to initialize them via the initialize action, and if we want to change the value we can either increment it or set it with the appropriate actions.
We can then use them later on in expressions or, as in our case, as part of the URL in the HTTP action. We get the value via the following simple expression:

@variables('page')

Build the solution

So let's start adding these actions. When typing “variable” in the new action window, 3 actions show up.
Let's start with initializing the variable.

Name it page, set the type to Integer and the value to 1, since the first page is where we should start.

We will also add another variable, dountil, to use in the condition for exiting the Do Until loop:

Then we add the Do Until action using the dountil flag in the exit condition. We also change the limits: since 100 might be too low in the future, we set the count to a really high number, 500, and set the timeout to 1 hour to make sure we can process all users.

After this we add the HTTP action and some other actions needed for processing. Then we add an increment page action to increment the page value, and a condition to check the last_page flag.

As the image shows, I've used an increment of 2 and an @or statement in the condition. This is because I'm running two parallel executions to speed up the process, fetching 2 * 150 users per turn.
Either of them could be the last page, and if one of them is, we set the dountil variable to 1 to exit the loop.
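To make the pattern concrete, here is a minimal sketch of what a single-branch version of the loop could look like in the workflow definition's code view. The action names and the example URL are placeholders, and the real flow would also contain the processing actions and the condition that sets the dountil variable:

```json
"Until_last_page": {
  "type": "Until",
  "expression": "@equals(variables('dountil'), 1)",
  "limit": {
    "count": 500,
    "timeout": "PT1H"
  },
  "actions": {
    "Get_users": {
      "type": "Http",
      "inputs": {
        "method": "GET",
        "uri": "https://example.com/api/users?page_size=150&page=@{variables('page')}"
      },
      "runAfter": {}
    },
    "Increment_page": {
      "type": "IncrementVariable",
      "inputs": {
        "name": "page",
        "value": 1
      },
      "runAfter": {
        "Get_users": [ "Succeeded" ]
      }
    }
  },
  "runAfter": {}
}
```

Both the page and dountil variables are assumed to be created with InitializeVariable actions before the loop.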

Now we have created a flow that will collect users until the system tells us we have collected them all.

Thoughts

This is a good and useful pattern, but there are some considerations to take. When using Do Until loops we don't get the “pretty” logging that we get in For Each loops, where we can see all the runs; we only see the latest one.

Variables are really handy and easy to use, and make it possible to keep track of indexes and such when doing more complex workflows.

When you have developed your Logic App and it works well in the development environment, it's time to bring it to test and later on to production.
If you have developed it in Visual Studio you will find this easy, as long as you don't use any linked resources like another Logic App,
Azure Function, API Management or Integration Account. If you do, the next phase starts: the phase where you manually break down the resource IDs of these actions into concat statements with parameters and ARM template functions.

So what does it look like?
Let's start with an innocent workflow action, a call to another Logic App. In the designer it looks like this (both in the web designer and in VS):

The hardcoded parts are the subscription ID fake89f1-660a-4585-b4d8-bf5172cb7f70, the resource group name integration-we and the Logic App name INT001_Work_Order.
In order to make this portable to other subscriptions we need to remove the hardcoded subscription and resource group, using ARM template language; read more at https://docs.microsoft.com/en-us/azure/azure-resource-manager/resource-group-authoring-templates
We will use the subscription() object to get the subscription we are executing in and its ID via the subscriptionId property, and likewise the resource group name via the resourceGroup() object and its name property. These then resolve to the subscription and resource group we are deploying to.

It will look like this (I assume the Logic App name will be the same in the next environment):
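As a sketch, the parameterized workflow reference could look something like this. The action name and the triggerName are assumptions; the exact structure depends on the called Logic App's trigger:

```json
"INT001_Work_Order": {
  "type": "Workflow",
  "inputs": {
    "host": {
      "triggerName": "manual",
      "workflow": {
        "id": "[concat('/subscriptions/', subscription().subscriptionId, '/resourcegroups/', resourceGroup().name, '/providers/Microsoft.Logic/workflows/INT001_Work_Order')]"
      }
    }
  }
}
```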

As before we need to update it, and it should end up something like this. Note that I've added a parameter for the function app container name, since this will be different in different environments, and to be on the safe side we also added a parameter for the function name.
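A sketch of such a parameterized function reference, assuming FunctionAppName and FunctionName are declared in the template's parameters section (the action name is a placeholder):

```json
"Call_function": {
  "type": "Function",
  "inputs": {
    "function": {
      "id": "[concat('/subscriptions/', subscription().subscriptionId, '/resourcegroups/', resourceGroup().name, '/providers/Microsoft.Web/sites/', parameters('FunctionAppName'), '/functions/', parameters('FunctionName'))]"
    }
  }
}
```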

API Management usage is the same, but here we also have some more things to consider: the instance name and the subscription key (experience says it's easy to forget adding a parameter that is needed to get a smooth deployment).
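A sketch of what a parameterized API Management action could look like, assuming ApimInstanceName, ApimApiId and ApimSubscriptionKey are declared as template parameters (action name and fields beyond these are placeholders):

```json
"Create_work_order": {
  "type": "ApiManagement",
  "inputs": {
    "api": {
      "id": "[concat('/subscriptions/', subscription().subscriptionId, '/resourcegroups/', resourceGroup().name, '/providers/Microsoft.ApiManagement/service/', parameters('ApimInstanceName'), '/apis/', parameters('ApimApiId'))]"
    },
    "method": "POST",
    "subscriptionKey": "[parameters('ApimSubscriptionKey')]"
  }
}
```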

So as you can see there are some things to do here, and if you, like me, are a fan of the web designer, the next step after development in order to get this into VSTS would be to start building the ARM template by hand.
I've found this very time consuming and error prone; it's easy to make mistakes and it takes time to test the deployment. It also often changes the Logic App a bit, which might not be so good if tests have already been done on the implementation.

So we want to automate all this and more, i.e. pushing parameters in the Logic App to the ARM template, making it possible to change the parameters with a parameter file between environments.
We are using the Logic App Template Creator created by Jeff Hollan (https://github.com/jeffhollan/LogicAppTemplateCreator) to automate this; it can even create the parameter file.
With this we can extract the Logic App, set up parameters for TEST/PROD, add it to VSTS and start deploying directly. Note that gateways and their deployment are not supported yet, and if you find bugs, help out or report them so we can fix them together.

The Template Generator will automate this and do the following:

Logic App (workflow)

In addition to turning the ID into a generic concat path, it also adds parameters at the top if the referenced Logic App is in another resource group.

Logic App in the same resource group (assuming that it will be the same in the next environment):

Azure Function

Azure Functions are referenced via a function app and, as the last part, the function name, so both of these need to be parameters. We also get the same handling of the resource group as with the Logic App.

API Management

API Management and APIs are referenced in the same way as everything else, so we need the same handling. Here it's also important to be able to set the instance name and subscription key, since they will change in the next environment.

Integration Account

When working with Integration Accounts, these need to be set in the next environment as well, so if they are used some parameters and settings are added:

"parameters": {
  ...
  "IntegrationAccountName": {
    "type": "string",
    "metadata": {
      "description": "Name of the Integration Account that should be connected to this Logic App."
    },
    "defaultValue": "myiaaccountname"
  },
  "IntegrationAccountResourceGroupName": {
    "type": "string",
    "metadata": {
      "description": "The resource group name that the Integration Account are in"
    },
    "defaultValue": "[resourceGroup().name]"
  }
  ...
}
...
"integrationAccount": {
  "id": "[concat('/subscriptions/', subscription().subscriptionId, '/resourcegroups/', parameters('IntegrationAccountResourceGroupName'), '/providers/Microsoft.Logic/integrationAccounts/', parameters('IntegrationAccountName'))]"
}

Parameters

Parameters added inside the Logic App (for now via the Code View) will be pushed and added as ARM template parameters. So if you have this parameter inside your Logic App:

After running the extractor this parameter will be an ARM template parameter, and the defaultValue of the Logic App parameter will be set to the value of that ARM template parameter, so it can now be set at deployment time via the parameters file.
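As a sketch, assuming a Logic App parameter named MyParameter: in the extracted template it would appear both as an ARM template parameter and as a workflow definition parameter wired to it (names and values here are placeholders):

```json
"parameters": {
  "MyParameter": {
    "type": "string",
    "defaultValue": "value-from-the-logic-app"
  }
},
...
"definition": {
  ...
  "parameters": {
    "MyParameter": {
      "type": "String",
      "defaultValue": "[parameters('MyParameter')]"
    }
  }
}
```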

We are currently working on more, like trigger functionality to get frequency and interval settings set up,
and also functions to handle special triggers like the file trigger, where the folder path is base64 encoded and built up in a special way.

Be aware that right now the gateway connectors strangely remove the user/password if they are not supplied, so they will break; remove them manually from the template and create them manually for now. A fix is coming to not include them.
But for now, hope this helps!

Here is a sample PowerShell script that will extract the Logic App, generate a template and create a parameter file:

#If you have problems with the execution policy, run this in an elevated PowerShell window:
#Set-ExecutionPolicy -ExecutionPolicy Unrestricted

#We have the module in a folder structure like this and it needs to be imported
Import-Module ".\LogicAppTemplateCreator\LogicAppTemplate.dll"

#Credentials will "flow" if you are logged in.
#Set the name of the Logic App
$logicapname = 'INT001_Work_Order'
#Set the resource group of the Logic App
$resourcegroupname = 'integration-we'
#Set the subscription id of where the Logic App is
$subscriptionid = 'fakeb73-d0ff-455d-a2bf-eae0b300696d'
#Set the tenant to use with the Logic App, make sure it is the right tenant
$tenant = 'ibiz-solutions.se'
#Set the output file names
$filenname = $logicapname + '.json'
$filennameparam = $logicapname + '.parameters.json'

Get-LogicAppTemplate -LogicApp $logicapname -ResourceGroup $resourcegroupname -SubscriptionId $subscriptionid -TenantName $tenant | Out-File $filenname
#$filenname = $PSScriptRoot + "\" + $filenname
Get-ParameterTemplate -TemplateFile $filenname | Out-File $filennameparam

As you can see this will handle all of the necessary concat setups, and it automatically adds parameters for the important things like the API Management instance name and subscription key, to make sure the parameters are not forgotten.
This makes it better and easier to use the web designer rather than VS, since in VS you still need to do this work or export it afterwards (but then what is the point of using VS?).
With this it's possible to just develop in the web designer, as I prefer, then extract the template, verify it, update it with parameters (preferably in the Logic App, so you can re-extract the template to make sure all is working before the deployment) and put it into VSTS for deployment.

When deploying APIs to API Management via Git we can run into the error “could not find backend service with id abc”. This comes from the SOAP handling recently added to API Management.

If we take a look inside the Git repository, there is a new folder called “backend”.

If we look inside, there are some folders named after the backend URLs.

Inside these folders we find a configuration.json that looks like this:
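As an illustration, a backend configuration.json might look roughly like this sketch; the exact fields can differ between API Management versions, and the URL is a placeholder:

```json
{
  "title": "https://backend.example.com/MyService.svc",
  "url": "https://backend.example.com/MyService.svc",
  "protocol": "soap"
}
```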

In one of these files you will find the ID matching the error message; make sure it is added in the commit.
Done?
If this API is created with this commit (i.e. deployed for the first time) the error will unfortunately persist; this is due to a minor bug.
To solve this we can temporarily remove the set-backend policy; in the SOAP case we find it in “policies/api/{apiname}.xml”.

File looks like:
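A sketch of such a policy file, using the backend-id from the example error message as a placeholder:

```xml
<policies>
  <inbound>
    <base />
    <set-backend-service backend-id="abc" />
  </inbound>
  <backend>
    <base />
  </backend>
  <outbound>
    <base />
  </outbound>
</policies>
```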

Temporarily remove the set-backend-service policy, create a commit and deploy the API. Afterwards, add the backend service again and create a new commit; now the deployment will work and the backend will be set correctly.

Developing in Azure API Management is done inside the Azure portal or in the Publisher portal that comes with the API Management instance. Developing and testing is easy, but when it comes to moving the developed API to the next instance (test or prod) it becomes tricky.

In this demo instance I have 2 APIs with logic applied to handle authentication and formatting.

Where is the code?

When developing APIs in the API Management Publisher portal you quickly learn that the API signature and the policy code are separated and handled in different areas of the portal, API and Policy. This separation is not as visible in the new API blade inside the Azure portal, where a lot of work has been done to make it easier to understand the APIs and see the whole picture in one place.

But the difference is quite important: the API is the “external” definition needed by clients to understand and implement our API, while the policy part is logic that is executed when a call is made to the API.

Exporting the API: press Export and select “OpenAPI specification”.

Inspecting the file shows that only the Swagger definition is exported, and not the policy logic added to the API.

Next we try to extract the code using the Git option, found in the Publisher portal under “Security” -> “Configuration repository”.

First press “Save configuration to repository”, and note that after the save is completed the Git icon in the top right corner changes color to green.

Get the clone URL, note that the username is “apim”, then scroll down and generate a password.

Now use the username and password to connect.
Tip: if you are using PowerShell or another tool that requires the authentication in the URL, remember to URL encode the password.

All of these we can copy to the next instance, but there are some things to take into consideration: each group/product etc. has a unique ID, and if these are created manually in the instances they are not guaranteed to be the same, in which case the import won't work in the other instance. So import everything you need.

Import changes into an API Management instance

The next step is to import these files into another API Management instance. To get started, we need to clone the code from that API Management instance:

Copy the files and commit. After that we need to deploy the changes in the repository to the API Management instance, so once again we go to the Publisher portal and “Security” -> “Configuration repository”. Scroll down and press “Deploy repository configuration”, and the changes are applied.

Exposing services like SAP Gateway is an important task for API Management, but not always so easy. Security is often tight around these products, so we need to understand it in order to get the setup correct.
To get started, let's look at the setup we were facing.
SAP Gateway is hosted inside the on-premises network behind several firewalls and hard restrictions, while API Management is hosted in Azure. In this case we had an ExpressRoute setup ready to be used, so we could deploy API Management inside a preexisting subnet. Firewall rules were set up based on the local IP of the API Management instance (according to the information I have, these should be static when deployed in a VNET).

API Management is deployed inside the network by setting the External Network in the network tab to a subnet of the VNET that is connected to the ExpressRoute. Make sure that API Management is alone in this subnet; see the image for more information.

After this is done we set up the routing/firewall rules. To get the local IP of the API Management instance we put up a VM with IIS, created an API inside API Management that called the standard IIS start page, and then searched the IIS logs for the local IP.

Now we can start creating the API, and I'll jump right into the policy settings of the API that exposes the SAP Gateway. The security model on SAP GW can vary, but in this case we had Basic authentication as the authentication mode. Handling this is quite straightforward in API Management; there is a policy ready to be used, “Authenticate with Basic”:

So we started off adding the authentication part; now the easy part is done.
When calling the API we just got a 403 Forbidden response saying “Invalid X-CSRF-Token”. Looking into this we found that it's the anti-forgery setup in SAP Gateway. To handle it, a token and a cookie are needed, and they are retrieved via a successful GET call to the gateway.
The initial call uses the same URL (make sure that the GET operation is implemented so the result is successful (returns 200 OK), otherwise the token is not valid).
Since I had no access to the SAP GW without API Management, my testing was done from Postman via API Management to SAP GW.
Adding the “X-CSRF-Token” header with the value Fetch will retrieve the token and cookie, making the call look like:

The response looks like:

The interesting parts are the headers and cookies, so let's have a look. Under headers we find the X-CSRF-TOKEN that we need to use in the following request.

In the cookies we find 2 items, and we are interested in the one called “sap_xsrf..”; this is the anti-cross-site-request-forgery cookie that is needed in POST/PUT requests to SAP Gateway.

Composing these makes a valid request look like this:

So now we can do these two requests to get a valid POST call into SAP Gateway from Postman; let's move on to setting this up in API Management.
In API Management we don't want the client to have to understand this, or force them to implement the two calls and the functionality to copy headers and so on; we just want to expose a “normal” API.
So we need to configure API Management to make these two calls to SAP Gateway within the same client call, to be able to send a valid POST request to SAP Gateway.

In order to make this work we need to use policies, so after setting up the standard POST request to create a Service Order we go to the Policy tab.

In the beginning it looks something like this:

The first thing we need to do is add a send-request policy. I configured it with mode new and used the variable name fetchtokenresponse.
Since retrieving the token is done with a GET request to the SAP Gateway, we reuse the same URL as the API (after rewrite), set the header X-CSRF-Token to Fetch since we are fetching the token, and add the Authorization header with the value for Basic authentication.
So let's start by creating the call that does the GET token request; we add this code in the inbound section.
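A sketch of such a send-request policy; the Authorization value is a placeholder (base64 of "user:pass"), and whether context.Request.Url is the right URL to reuse depends on where in the pipeline the rewrite happens:

```xml
<send-request mode="new" response-variable-name="fetchtokenresponse" timeout="10" ignore-error="false">
  <set-url>@(context.Request.Url.ToString())</set-url>
  <set-method>GET</set-method>
  <set-header name="X-CSRF-Token" exists-action="override">
    <value>Fetch</value>
  </set-header>
  <set-header name="Authorization" exists-action="override">
    <value>Basic dXNlcjpwYXNz</value>
  </set-header>
</send-request>
```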

The next step is to extract the values from the send-request operation and add them to our POST request. Setting the X-CSRF-Token header is fairly straightforward: we use the set-header policy and retrieve the header from the response variable. The code looks like:
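A sketch of that set-header policy, reading the token from the fetchtokenresponse variable set by the earlier send-request:

```xml
<set-header name="X-CSRF-Token" exists-action="override">
  <value>@(((IResponse)context.Variables["fetchtokenresponse"]).Headers.GetValueOrDefault("x-csrf-token", ""))</value>
</set-header>
```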

A bit trickier is the cookie. Since there is no “standard” cookie handler we need to implement some more logic; in the sample I provide I just made a lot of assumptions about the cookie. We need the cookie starting with sap_XSRF, so I started by splitting all cookies on ';' and finding the cookie that contained “sap_XSRF”. In our case this also had a domain that I didn't need, so I removed it by splitting on ',' (comma) and used the result in a set-header policy.
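The cookie handling described above could be sketched like this; it carries the same assumptions about the cookie format (semicolon-separated segments, a comma-separated domain part to strip):

```xml
<set-header name="Cookie" exists-action="override">
  <value>@{
      // Set-Cookie headers from the token response, joined into one string
      var cookies = ((IResponse)context.Variables["fetchtokenresponse"]).Headers.GetValueOrDefault("Set-Cookie", "");
      // find the anti-forgery cookie and drop the trailing domain part
      var xsrf = cookies.Split(';').FirstOrDefault(c => c.Contains("sap_XSRF"));
      return xsrf == null ? "" : xsrf.Split(',')[0].Trim();
  }</value>
</set-header>
```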

The result is now complete, and the client will assume that this is just a regular API like the others, so we can expose it with regular API Management API key security.

Wrap up

Exposing the SAP Gateway is not a straightforward task, but after understanding the process and implementing the SAP Gateway complexity inside API Management we can expose these functions just like any other API. I would suggest adding some rate limits and quotas to protect the gateway from overload. This scenario proves the value of API Management: it provides possibilities to solve complex authentication and anti-forgery patterns in order to standardise your own API facade with the same auth mechanism for the client, without taking the backend security into consideration.

When it comes to deployment and release management in VSTS, we need to connect to our subscription.

This is done via Service Endpoints created inside VSTS. There are two ways of authenticating to Azure subscriptions: with a user account or with an AAD application. Typical scenarios for AAD applications are when the subscription is not in your tenant or when you don't have access to the subscription with the appropriate role.

1 User Account: The subscription is accessible with your user account

When you use the same user account to log in to VSTS and your Azure subscription, the subscription is auto-discovered and can be picked under the headline “Available Azure Subscriptions”.

Pick the Subscription

Press the Authorize button to make VSTS create the required authorization

If you have the required access you can now start deploying to this subscription

2 AAD Application: The subscription is not accessible with your user account

If the VSTS user doesn't have access to the subscription, it will not be listed under “Available Azure Subscriptions” and we will need to add it manually.

Press the Manage button

A new tab is opened and you will come to the Service tab

Press the “New Service Endpoint”

A new dialog is opened and you can now create a connection to an existing accessible subscription, just as before; but since we want to create it based on a Service Principal, press the “here” link.

The dialog is transformed and now we can add the full information from an AAD application:

Connection name: enter a name for the connection, e.g. “Customer A TEST”

Subscription ID: the GUID of the subscription

Subscription Name: a friendly, understandable name for the subscription; we often use the same as the connection name, e.g. “Customer A TEST”

Service Principal Client ID: this is the AAD application Client ID

Service Principal Key: this is the AAD application key

Tenant ID: the GUID of the tenant

Sample from an AAD application in Azure; this is how you find the values:

Tenant ID: this is the Directory ID, found in the Properties section of the Azure AD directory.

Service Principal Client ID: this is the Application ID of the AAD application.

Service Principal Key: this is a key on the Azure AD application, found under Keys; generate a new key (it is only visible once, right after save).

Verify the connection.
Tip: if the verification fails, make sure that the AAD application has at least “Contributor” rights on at least one resource group (not just on the subscription).

Press Ok

This service endpoint can now be found in the subscription list.

I prefer using an AAD application connection setup in production environments, just to make sure there are no “personal account” connections that can mess things up.

Working with the Connector

It's fairly easy to get started with the connector, selecting data and creating/updating records, but as soon as the business requirements land on our table it gets a bit trickier. Often we need to handle scenarios where we link relations and/or assign ownership to the entity we are creating/updating. Then we need to know a little more about the setup in Dynamics CRM, and our best friend for this is the Settings and Customizations tab.

By then selecting Customize the System, a new window opens and it's now possible to go in and verify the entity, check keys, relations etc.

Using Navigational properties

Navigation properties are used to link entities together; they can be set directly within the body of the update/insert request and will then be linked correctly inside Dynamics CRM Online. An example would look like this, where we are setting the currency (transactioncurrency) of the account using the navigation property _transactioncurrencyid_value:
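A sketch of such a request body for an account; the name and the currency GUID are made-up examples:

```json
{
  "name": "Contoso Ltd",
  "_transactioncurrencyid_value": "c0a1f8c4-3e2b-4b0a-9f5d-111111111111"
}
```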

Setting owner

A frequently used operation is assigning an owner. This is now implemented so it's possible to do it directly via the Update/Create operation, instead of via a separate operation as previously.

In the Dynamics CRM Online connector it's easy to set the owner: just apply the field _ownerid_type, which is set to either systemusers or teams depending on the owner type, and _ownerid_value, which is the key of the user. An example below:
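A sketch of the relevant part of the request body; the user GUID is a made-up example:

```json
{
  "_ownerid_type": "systemusers",
  "_ownerid_value": "5e8b98a1-2c3d-4e5f-8a9b-222222222222"
}
```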

Lessons learned

Don't overflow the CRM! Since Logic Apps is really powerful in parallelism, it's good to have some control over how many instances are created and executed against Dynamics CRM. We usually try to make sure that there are no more than 200 parallel actions towards a CRM at any given time.

Learn how to check fields, properties and keys, since you will get stuck on errors when sending in the wrong type, and then you will need to check what type it is.
OptionSets are commonly used and good from a GUI perspective, but not as good in integration, since they are translated to a number that we often need to translate to a code or text. Learning how to check the values inside CRM will speed up this process.

When we started using the connector there were problems with assigning ownership and handling float numbers; these were fixed early by the product group, and since then we haven't found any issues with the way we are using the connector.

One of BizTalk's greatest strengths is that messages are not lost. Even if there is an error processing a message, we still get a suspended message that an administrator can handle manually.
Sometimes, though, that is not the preferred way of handling errors.
When working with web APIs it is common to let the client handle any errors by returning a status code indicating whether the request was successful or not, especially if the error is in the client request.

There is a standard pipeline component included in BizTalk that can validate any incoming XML message against its XSD schema. However, if the validation fails the request will just get suspended: all the client receives is a timeout, and an administrator has to handle the suspended message even though there is not much to do about it, since the client connection is closed.

One way of handling this is to do the validation in an orchestration, catch any validation errors and create a response message to return, but writing an orchestration just for validation doesn't make much sense, with the performance implications and all.

A better solution would be if we could:

Abort the processing of the request as early as possible if there is a validation error.

Leverage the existing component for validating messages.

Return a HTTP 400 status code to indicate to the client that the request was bad.

Avoid suspending the message in BizTalk.

Since the receive pipeline is the earliest stage we can inject custom code in, a custom pipeline component would be the best fit.

The pipeline component would need to execute the standard XmlValidator component and catch any validation error thrown by the validator. We could of course log any validation errors to the event log if we still would like some error logging.

If a validation error was caught, we need to create a new message with the validation error details so the client understands what is wrong.

The context property OutboundHttpStatusCode should be set to 400 so that the adapter knows to return the message with a status code of 400 bad request.

To prevent any further processing of the message and indicate to BizTalk that the new message is the response to return to the client, a number of context properties related to request response messages need to be set. Correlation tokens are copied from the request message to make the response go through the same receive port instance.

This component is available as a part of iBiz Solutions open source initiative. The source code is available on GitHub and the component is also available on NuGet.

At iBiz Solutions we are heavy users of Visual Studio Release Management. Visual Studio Release Management in combination with Team Foundation Server for source control and TFS Build Services creates the Application Lifecycle Management control that we need in our projects. It enables us to have full control over exactly what gets built and what build packages that are released to what environment and so on.

Visual Studio Release Management is as mentioned a critical tool for us, but it still has a few places where the tool could do with some improvements. A critical feature is of course the ability to get an overview of which build is released into which environment; the current version, however, is not very efficient when it comes to searching and filtering the list of releases.

Another missing feature is the ability to delete previous releases. At first this sounds like a bad idea; one should save all releases as they might provide important information in the future. But there are situations where one makes stupid mistakes, and where releases just clutter the bigger picture and make it harder to actually see the releases that are important. An efficient way of filtering, or a way of marking a release as no longer relevant, might have solved the issue, but as mentioned this does not exist in the current version of the tool.

Long story short: here is the script that we run directly against the database to delete specific releases and all their related information in Visual Studio Release Management.

When consuming an API App from C#, the Azure App Service SDK helps us generate client code to improve the development process (read more here).

When testing the generated code you might get a ‘Method Not Allowed’ error with status code 405, even if the security settings are correct and the API App works perfectly when used via Logic Apps, Postman, Fiddler, Swagger etc.

If the thrown exception looks something like this:

Then the problem is probably an incorrectly generated URI from the code generator, which has used http instead of https (common in several places; API Apps should always use https).

To check this, go into the class of the ServiceClient (in my case FlatFileEncoder) and check the base URI settings. As you can see in the image below, mine was generated with http instead of https.

After changing the URI from http to https it starts working; my code is executed and the result from the FlatFileToXML function is returned as expected.