<p>Sound Code - Mark Heath's development blog (<a href="https://markheath.net/">markheath.net</a>)</p>
<h2><a href="https://markheath.net/post/arm-vs-azure-cli">ARM Templates vs Azure CLI</a></h2>
<p><em>2019-01-17 by Mark Heath</em></p>
<p>Recently, I've been posting tutorials about how to <a href="https://markheath.net/post/deploying-azure-functions-with-azure-cli">deploy Azure Function Apps with the Azure CLI</a> and <a href="https://markheath.net/post/managed-identity-key-vault-azure-functions">create a managed identity to enable your Function App to access Key Vault</a>. I love how easy the Azure CLI makes it to quickly deploy and configure infrastructure in Azure.</p>
<p>But is the Azure CLI the right tool for the job? After all, aren't we supposed to be using ARM templates? If you've not used them before, <a href="https://docs.microsoft.com/en-us/azure/azure-resource-manager/resource-group-authoring-templates">ARM templates</a> are simply JSON files describing your infrastructure, which can be deployed with a single command.</p>
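<p>For example, assuming you already have a resource group, deploying a template really is a one-liner with the Azure CLI (a minimal sketch; <code>azuredeploy.json</code> is just an illustrative file name):</p>
<pre><code class="language-powershell">az group deployment create -g &quot;MyResourceGroup&quot; `
    --template-file azuredeploy.json
</code></pre>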
<p>My general recommendation is that while the Azure CLI is great for experimenting and prototyping, once you're ready to push to production, it would be a good idea to create ARM templates and use them instead.</p>
<p>However, in November, an interesting tweet caught my eye. Pascal Naber <a href="https://pascalnaber.wordpress.com/2018/11/11/stop-using-arm-templates-use-the-azure-cli-instead/">wrote a blog post</a> making the case that ARM is unnecessarily complex compared to just using the Azure CLI. And I have to admit, I have some sympathy with this point of view. In the article he shows a 200+ line ARM template and contrasts it with about 10 lines of Azure CLI to achieve the same result.</p>
<blockquote class="twitter-tweet" data-lang="en"><p lang="en" dir="ltr">I’ve written a new blogpost: “Stop using ARM templates! Use the Azure CLI instead”. Read it here: <a href="https://t.co/MhOeKpvnDR">https://t.co/MhOeKpvnDR</a> <a href="https://twitter.com/hashtag/azure?src=hash&amp;ref_src=twsrc%5Etfw">#azure</a> <a href="https://twitter.com/hashtag/cli?src=hash&amp;ref_src=twsrc%5Etfw">#cli</a> <a href="https://twitter.com/hashtag/arm?src=hash&amp;ref_src=twsrc%5Etfw">#arm</a> <a href="https://twitter.com/hashtag/devops?src=hash&amp;ref_src=twsrc%5Etfw">#devops</a> <a href="https://twitter.com/Xpiritbv?ref_src=twsrc%5Etfw">@Xpiritbv</a></p>&mdash; Pascal Naber (@pascalnaber) <a href="https://twitter.com/pascalnaber/status/1061843688677085184?ref_src=twsrc%5Etfw">November 12, 2018</a></blockquote>
<script async src="https://platform.twitter.com/widgets.js" charset="utf-8"></script>
<p>So in this article I want to give my thoughts on the merits of the two different approaches: <strong>ARM templates</strong> which are a very <em>declarative</em> way of expressing your infrastructure (i.e. <em>what</em> should be deployed), versus <strong>Azure CLI</strong> scripts which represent a more <em>imperative</em> approach (i.e. <em>how</em> it should be deployed).</p>
<h3>Infrastructure as Code</h3>
<p>The term &quot;<a href="https://docs.microsoft.com/en-us/azure/devops/learn/what-is-infrastructure-as-code">infrastructure as code</a>&quot; is used to express the idea that your infrastructure deployment should be <strong>automated</strong> and <strong>repeatable</strong>, and the &quot;code&quot; that defines your infrastructure should be <strong>stored in version control</strong>. This makes a lot of sense: you don't want error-prone manual processes to be involved in the deployment of your application, and you want to be sure that if all your infrastructure was torn down, you could easily recreate <em>exactly</em> the same environment.</p>
<p>But &quot;infrastructure as code&quot; doesn't dictate what file format or DSL our infrastructure should be defined in. The most common approaches are JSON (used by ARM templates) and YAML (used by Kubernetes). Interestingly, <a href="https://docs.microsoft.com/en-us/azure/service-fabric-mesh/service-fabric-mesh-overview">Service Fabric Mesh</a> has introduced a YAML format that gets converted behind the scenes into an ARM template, presumably because the YAML allows a simpler way of expressing the makeup of the application (we'll come back to this idea later).</p>
<p>However, there's no obvious reason why a PowerShell or Bash script couldn't also count as &quot;infrastructure as code&quot;, or even an application written in JavaScript or C#. And thanks to the <a href="https://docs.microsoft.com/en-us/cli/azure/install-azure-cli?view=azure-cli-latest">Azure CLI</a>, <a href="https://docs.microsoft.com/en-us/powershell/azure/overview?view=azps-1.0.0">Azure PowerShell</a>, <a href="https://docs.microsoft.com/en-us/dotnet/azure/">Azure SDK for .NET</a> and <a href="https://docs.microsoft.com/en-us/javascript/api/overview/azure/?view=azure-node-latest">Azure Node SDK</a>, you can easily use any of those options to automate deployments.</p>
<p>The key difference is not whether both approaches count as &quot;infrastructure as code&quot;, but the idea that <em>declarative</em> ways of defining the infrastructure are better than <em>imperative</em>. A JSON document contains no logic - it simply expresses all the &quot;resources&quot; that form the infrastructure and their configuration. Whereas if we choose to write a script using the Azure CLI, then it is inherently <em>imperative</em> - it describes the steps required to provision the infrastructure.</p>
<p>So which is best?</p>
<h3>Declarative</h3>
<p>Well, the received wisdom is definitely that declarative is best. Azure strongly encourages you to use JSON-based ARM templates, Service Fabric Mesh and Docker use YAML, and other popular infrastructure as code services like <a href="https://www.terraform.io/">Terraform</a> have <a href="https://github.com/terraform-providers/terraform-provider-aws/blob/master/examples/two-tier/main.tf">their own file format</a> designed to be a <a href="https://www.terraform.io/docs/configuration/index.html">more readable alternative to JSON</a>.</p>
<p>In most cases, you are simply defining the &quot;resources&quot; that form your infrastructure - e.g. I want a SQL Server, a Storage Account, an App Service Plan and a Function App. You also get to specify all the properties: what location the resources should be in, what pricing tier/sizing you want, what special configuration settings you need to enable. Most of these formats also allow you to include the application code itself as a configuration property: you can specify what version of a Docker image your Web App should run, or what GitHub repository the source code for your Function App can be found in, allowing a fully-working application to be deployed with a single command.</p>
<p>There are several key benefits to the declarative approach. First of all, it uses a <strong>desired state</strong> approach, which allows for <strong>incremental</strong> and <strong>idempotent</strong> deployments. In other words, your template defines what resources you want to be present, and so the act of deploying that template will only take effect if those resources are not already present, or are not in the state you requested. This means that deploying an ARM template is idempotent - there is no danger in deploying it twice - you won't end up with double of everything, or errors on the second run-through.</p>
<p>There are some other nice benefits to declarative template files. They can be <strong>validated</strong> in advance of running, greatly reducing the chance that you could end up with a half-complete deployment. The underlying deployment engine can intelligently optimize by identifying which resources are needed first and what steps can be performed <strong>in parallel</strong>. Any logic to retry actions in the case of transient failures is also built into the template deployment engine. And templates can be <strong>parameterized</strong>, allowing you to use the same template to deploy to staging as well as production. Parameters also enable you to avoid storing secrets in templates.</p>
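<p>For instance, the same template can be pointed at different environments, with secrets supplied at deployment time rather than stored in the template (a sketch; the parameter names are purely illustrative):</p>
<pre><code class="language-powershell">az group deployment create -g &quot;MyApp-Staging&quot; `
    --template-file azuredeploy.json `
    --parameters environmentName=staging sqlAdminPassword=$env:SQL_ADMIN_PASSWORD
</code></pre>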
<p>But it's not all great. Declarative template formats like ARM tend to suffer from a number of weaknesses. The templates themselves are often very <strong>verbose</strong>, especially if you get a tool to auto-generate them, and if you prefer to hand-roll them, the documentation is often sparse. When I build ARM templates I usually start by copying one of the <a href="https://github.com/Azure/azure-quickstart-templates">Azure Quickstart templates</a> and adapting it to my needs. But often that requires me to also visit <a href="https://resources.azure.com/">resources.azure.com</a> to attempt to deduce what template setting is needed to enable a feature I only know how to turn on via the portal. It can be a painfully slow and error-prone process.</p>
<p>Another issue is that although YAML and JSON files are touted as being &quot;human readable&quot;, the fact is that they quickly lose their readability once they go beyond a screen-full of text, as Pascal's example clearly demonstrated.</p>
<p>And there are some practical annoyances. For example, a while ago I deployed a resource group that used some secrets. I parameterized them in the template (as is the best practice), and so when I initially deployed the ARM template, I provided those secret values. But the trouble was, now <em>every</em> time I wanted to redeploy the template because of some other unrelated change, I needed to source those secret values again even though they weren't modified. There didn't seem to be an obvious way of asking it to simply leave those secrets with the values they had on a previous deployment.</p>
<p>And this brings me onto the final issue that you inevitably run into with these templates. They end up requiring their own pseudo-programming language. In ARM templates, there are often dependencies between items. I need the Storage Account to be created before the Function App, because the Function App has an App Setting pointing at the connection string for the Storage Account. In the case of a web app that talks to a database it might be even more complex, with the database needing the web app's IP address in order to set up firewall rules, and the web app needing the database's connection string, resulting in a circular dependency.</p>
<p>The ARM template syntax has the concept of 'variables' which can be calculated from parameters, and can be manipulated using various helper functions such as 'concat' and 'listkeys' as you can see in the following example:</p>
<pre><code class="language-js">{
&quot;name&quot;: &quot;AzureWebJobsStorage&quot;,
&quot;value&quot;: &quot;[concat('DefaultEndpointsProtocol=https;AccountName=', variables('storageAccountName'), ';AccountKey=', listKeys(variables('storageAccountId'),'2015-05-01-preview').key1)]&quot;
},
</code></pre>
<p>And this seems to be an inevitable pattern in any declarative template format that attempts to define something moderately complex - you end up wanting regular programming constructs, such as conditional expressions, string manipulations, and loops. Here's a snippet from an API Management policy, defined in XML, that I saw recently; you can see it has also introduced a level of scripting.</p>
<pre><code class="language-xml">&lt;set-header name=&quot;X-User-Groups&quot; exists-action=&quot;override&quot;&gt;
&lt;value&gt;
@(string.Join(&quot;;&quot;, (from item in context.User.Groups select item.Name)))
&lt;/value&gt;
&lt;/set-header&gt;
</code></pre>
<p>The frustration I have with these DSLs within templates is that they are very limiting, lack support for intellisense and syntax highlighting, and tend to make our templates more indecipherable and fragile. Escaping values correctly can become a real headache as you can find yourself encoding JSON strings within JSON strings.</p>
<h3>Imperative</h3>
<p>So why not just write our deployment scripts in a regular scripting or programming language? There are some obvious benefits. The language already has familiar syntax, supporting conditional steps, storing and manipulating variables for later use, generating unique names according to custom naming conventions, and much more. Our editors can help us with intellisense, syntax highlighting and refactoring shortcuts.</p>
<p>Also, we can follow the principles of &quot;clean code&quot; and extract blocks of logic into reusable methods. So I might make a method that knows how to create an Azure Function App configured just the way I like it, with specific features enabled, and specific resource tags that I always apply. This allows the top-level deployment script/code to read very naturally whilst hiding the less interesting or repetitive details at a lower level.</p>
<p>For example, the <a href="https://docs.microsoft.com/en-gb/dotnet/api/overview/azure/appservice?view=azure-dotnet">fluent Azure C# SDK syntax</a> gives an idea of what this could look like. Here's creating a web app:</p>
<pre><code class="language-cs">var app1 = azure.WebApps
.Define(&quot;MyUniqueWebAddress&quot;)
.WithRegion(Region.USWest)
.WithNewResourceGroup(&quot;MyResourceGroup&quot;)
.WithNewWindowsPlan(PricingTier.StandardS1)
.Create();
</code></pre>
<p>And you could easily build upon this approach by defining your own custom extension methods.</p>
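<p>The same idea works in a CLI-based script too. Here's a rough sketch (the helper name, tags and defaults are purely illustrative, not a prescribed pattern) of a reusable PowerShell function that creates a Function App the way I like it and applies some standard resource tags:</p>
<pre><code class="language-powershell">function New-StandardFunctionApp {
    param($Name, $ResourceGroup, $StorageAccount, $Location)
    az functionapp create -n $Name -g $ResourceGroup `
        --storage-account $StorageAccount `
        --consumption-plan-location $Location `
        --runtime dotnet
    # apply the resource tags we always want on our apps
    az resource tag --tags team=backend costCentre=1234 `
        -g $ResourceGroup -n $Name --resource-type &quot;Microsoft.Web/sites&quot;
}
</code></pre>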
<p>Just like ARM templates, imperative deployment scripts can easily be parameterized, ensuring you keep secrets out of source control, and can reuse the same script for deploying to different environments.</p>
<p>But imperative deployment scripts like this do potentially have some serious drawbacks. The first is: what about <strong>idempotency</strong>? If I run my script twice, will it fail the second time because things are already there? Can it work out what's missing and only create that? Well, we don't want to bloat our script with lots of conditional logic checking whether each resource exists and only creating it if it's missing, but it turns out that idempotency isn't all that hard to achieve. In fact, Pascal Naber recently posted a gist showing an <a href="https://gist.github.com/pascalnaber/75412a97a0d0b059314d193c3ab37c4c">idempotent bash script using the Azure CLI</a> to deploy a Function App configured to access Key Vault. You can safely run it multiple times.</p>
<p>For example if I run the following Azure CLI commands multiple times, I won't get any errors:</p>
<pre><code class="language-powershell">az group create -n &quot;IdempotentTest&quot; -l &quot;west europe&quot;
az appservice plan create -n &quot;IdempotentTest&quot; -g &quot;IdempotentTest&quot; --sku B1
</code></pre>
<p>But what about the <strong>desired state</strong> capabilities of a declarative framework like ARM templates? What if we wanted a Standard rather than Basic tier app service plan? Let's try:</p>
<pre><code class="language-powershell">az appservice plan create -n &quot;IdempotentTest&quot; -g &quot;IdempotentTest&quot; --sku S1
</code></pre>
<p>And this works - our app service plan gets upgraded to the standard tier! Let's make it harder. What if we decide it should be a Linux app service plan:</p>
<pre><code class="language-powershell">az appservice plan create -n &quot;IdempotentTest&quot; -g &quot;IdempotentTest&quot; `
--sku S1 --is-linux
</code></pre>
<p>And now we get an error - <em>&quot;You cannot change the OS hosting your app at this time. Please recreate your app with the desired OS.&quot;</em> Although, to be fair, I'm not sure an ARM template deployment would fare any better attempting to make this change. Not all modifications to desired state can be straightforwardly implemented.</p>
<p>To be honest, I was a little surprised by this. I hadn't realised the Azure CLI had this capability, and it makes it a much more competitive alternative to ARM templates. I haven't tried the same thing with the Azure SDK for .NET - that would be an interesting experiment for the future.</p>
<p>This leaves me thinking that ARM templates actually offer very few tangible benefits over using a scripting approach with Azure CLI. Perhaps one weakness of the scripting approach is that idempotency certainly is not automatic. You'd have to think very carefully about what the conditional steps and other logic in your scripts were doing. For example, if you generate a random suffix for a resource name like I do in many of my PowerShell scripts, then straight off you've not got idempotency - you'd need custom code to check if the resource already exists and find out what random suffix you used last time.</p>
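<p>A minimal sketch of what that existence check might look like (the <code>funcsdemo</code> prefix and the variable names are just illustrative):</p>
<pre><code class="language-powershell"># reuse an existing storage account with our prefix if there is one,
# otherwise generate a new random suffix
$storageAccountName = az storage account list -g $resourceGroup `
    --query &quot;[?starts_with(name, 'funcsdemo')].name | [0]&quot; -o tsv
if (-not $storageAccountName) {
    $storageAccountName = &quot;funcsdemo$(Get-Random -Minimum 10000 -Maximum 99999)&quot;
}
</code></pre>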
<p>But it's interesting that we are starting to see this approach to infrastructure as code gaining momentum elsewhere. I've not had a chance to play with <a href="https://www.pulumi.com/why-pulumi/delivering-cloud-native-infrastructure-as-code/">Pulumi</a> yet, but it seems to be taking a very similar philosophy - define your infrastructure in JavaScript, taking advantage of the expressiveness, familiarity, reusability and abstractions that a regular programming language can offer.</p>
<h3>The Verdict</h3>
<p>There are good reasons why ARM templates are still the recommended way to deploy resources to Azure. They help you avoid a lot of pitfalls, and still have a few benefits that are hard to replicate with a scripting or regular programming language. But they come at a cost of complexity and are generally unfriendly for developers to understand and tweak. It feels to me like we're not too far away from code-based approaches being able to offer the same benefits but with a much simpler and more developer-friendly syntax. The Azure CLI already seems very close so long as you take a sensible approach to what additional actions your script performs.</p>
<p>Maybe what's needed is simply a much easier way to generate the templates in the first place - if I can write a very simple script that produces an ARM template, then I don't need to worry about how verbose the resulting template is. It seems to me that's what the Service Fabric Mesh team decided by choosing to create a YAML resource definition that gets compiled into ARM. (Although I'm sure that before long that YAML will start adding DSL-like constructs for things like string manipulation.)</p>
<p>Anyway, thanks for sticking with this rather long and rambling post. I'm sure there's a lot more that could be said on the strengths and weaknesses of both approaches, so I welcome your feedback in the comments!</p>
<img src="http://feeds.feedburner.com/~r/markdotnet/~4/G1YbEsN9n50" height="1" width="1" alt=""/>https://markheath.net/post/arm-vs-azure-clihttps://markheath.net/post/run-from-packageDeploy Web and Function Apps with Run from Package2019-01-14T00:00:00Z2019-01-14T00:00:00ZMark Heathtest@example.com<p>Back in Feb 2017 I <a href="https://markheath.net/post/deploy-azure-functions-kudu-powershell">wrote about</a> how you can deploy an Azure Web App by zipping it up and pushing it to App Service with the Kudu REST API. But later that year, a much better new &quot;zip deploy API&quot; was announced, and I wrote <a href="https://markheath.net/post/deploy-azure-webapp-kudu-zip-api">another article</a> explaining how to use that. However, more recently, an even newer approach, known as <a href="https://docs.microsoft.com/en-us/azure/azure-functions/run-functions-from-deployment-package">&quot;run from package&quot;</a> has been announced, and is arguably now the best way to deploy your web apps and function apps.</p>
<p>So in this post, I'll show some examples of using &quot;Run from Package&quot; to deploy a simple website. As usual, I'll be using the Azure CLI, from PowerShell.</p>
<p>The way &quot;Run from Package&quot; works is that you simply set up a special App Setting called <code>WEBSITE_RUN_FROM_PACKAGE</code> and its value tells App Service where to find the zip containing your application. There are actually two options available to us. The zip file can be stored at any publicly available URI, so you can just point at a zip file in Azure Blob Storage. Or you can just upload the zip file directly to App Service and update a text file that points at it. We'll see both options in action.</p>
<h3>Step 1 - Create an empty web app</h3>
<p>We'll start off by creating a resource group, an app service plan and then putting an empty web app in.</p>
<pre><code class="language-powershell">$location = &quot;West Europe&quot;
$resGroupName = &quot;RunFromPackageDemo&quot;
az group create -n $resGroupName -l $location
$appServicePlanName = &quot;RunFromPackageDemo&quot;
az appservice plan create -n $appServicePlanName -g $resGroupName --sku B1
$webAppName = &quot;runfrompackagedemo1&quot;
az webapp create -n $webAppName -g $resGroupName --plan $appServicePlanName
</code></pre>
<p>It is at this point that we could have used the existing zip <a href="https://docs.microsoft.com/en-us/azure/azure-functions/deployment-zip-push">deploy API</a> to upload a zip of our application directly to this web app. Behind the scenes, the API would unzip the contents of the uploaded zip into the <code>wwwroot</code> folder. It's very easy to automate this with the Azure CLI:</p>
<pre><code class="language-powershell">az webapp deployment source config-zip -n $webAppName `
-g $resGroupName --src myApp.zip
</code></pre>
<p>But let's not do that for now. Instead, we'll see how to use Run from Package.</p>
<h3>Step 2 - Upload the zip to blob storage and generate a SAS token</h3>
<p>If we are opting for the approach where our <code>WEBSITE_RUN_FROM_PACKAGE</code> points at a URI, we need somewhere to store the zip, and an Azure blob storage container is a good choice. The recommendation is to use a private container, and generate a SAS token to secure access to the zip.</p>
<p>Here's how we could use the Azure CLI to automate creating a new storage account with a private container to store our zip files:</p>
<pre><code class="language-powershell">$storageAccountName = &quot;runfrompackagedemo1&quot;
az storage account create -n $storageAccountName -g $resGroupName `
--sku &quot;Standard_LRS&quot;
# get the connection string and save it as an environment variable
$connectionString = az storage account show-connection-string `
-n $storageAccountName -g $resGroupName `
--query &quot;connectionString&quot; -o tsv
$env:AZURE_STORAGE_CONNECTION_STRING = $connectionString
$containerName = &quot;assets&quot;
az storage container create -n $containerName --public-access off
</code></pre>
<p>And let's make a really simple example website to deploy - a single <code>index.html</code> zipped up as <code>version1.zip</code>:</p>
<pre><code class="language-powershell">Write-Output &quot;&lt;h1&gt;Version 1&lt;/h1&gt;&quot; &gt; &quot;index.html&quot;
$zipName = &quot;version1.zip&quot;
Compress-Archive -Path &quot;index.html&quot; -DestinationPath $zipName
</code></pre>
<p>And finally let's again use the Azure CLI to upload <code>version1.zip</code> to blob storage and generate a SAS URL for it. I'm giving mine a five year lifetime in this example. It would appear that the URL needs to be valid <a href="https://github.com/Azure/app-service-announcements-discussions/issues/32#issuecomment-366002976">for as long as you want the site to work</a>, so you should bear that in mind if you choose this technique. Remember that the SAS will be invalidated if you cycle the keys for your storage account. Normally I consider long-lived SAS tokens to be an anti-pattern, but in this case I'm less concerned since application binaries rarely contain very sensitive information.</p>
<pre><code class="language-powershell"># upload the zip to blob storage
$blobName = &quot;version1.zip&quot;
az storage blob upload -c $containerName -f $zipName -n $blobName
# generate a read-only SAS token that expires in 5 years
$expiry = (Get-Date).ToUniversalTime().AddYears(5).ToString(&quot;yyyy-MM-dd\THH\:mm\Z&quot;)
$sas = az storage blob generate-sas -c $containerName -n $blobName `
--permissions r -o tsv `
--expiry $expiry
# construct a SAS URL out of the blob's URL plus the SAS token
$blobUrl = az storage blob url -c $containerName -n $blobName -o tsv
$sasUrl = &quot;$($blobUrl)?$($sas)&quot;
</code></pre>
<h3>Step 3 - Point the Web App at the zip in blob storage</h3>
<p>Setting the app setting ought to be straightforward. The setting name is <code>WEBSITE_RUN_FROM_PACKAGE</code> and the value is the SAS URL we just generated. But due to a nasty escaping issue when using the Azure CLI (see <a href="https://github.com/Azure/azure-cli/issues/5228">this issue</a> and <a href="https://github.com/Azure/azure-cli/issues/7147">this issue</a>) we need an ugly workaround (note that this only applies when setting the value via the Azure CLI):</p>
<pre><code class="language-powershell"># escape the SAS URL to work
$escapedUrl = $sasUrl.Replace(&quot;&amp;&quot;,&quot;^^^&amp;&quot;)
# set the app setting
az webapp config appsettings set -n $webAppName -g $resGroupName `
--settings &quot;WEBSITE_RUN_FROM_PACKAGE=$escapedUrl&quot;
</code></pre>
<p>And that's it. With that one app setting, we've deployed our application. And if we visit our website, we'll see &quot;Version 1&quot;.</p>
<p>If we were to repeat the process, creating a <code>version2.zip</code> file, uploading it to blob storage, generating a SAS URL, and updating the <code>WEBSITE_RUN_FROM_PACKAGE</code> application setting, then we'd very soon see the new version in place.</p>
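<p>Condensed, that repeat looks something like this (a sketch reusing the variables and the escaping workaround from above):</p>
<pre><code class="language-powershell">Write-Output &quot;&lt;h1&gt;Version 2&lt;/h1&gt;&quot; &gt; &quot;index.html&quot;
Compress-Archive -Path &quot;index.html&quot; -DestinationPath &quot;version2.zip&quot; -Force
az storage blob upload -c $containerName -f &quot;version2.zip&quot; -n &quot;version2.zip&quot;
$sas2 = az storage blob generate-sas -c $containerName -n &quot;version2.zip&quot; `
    --permissions r --expiry $expiry -o tsv
$blobUrl2 = az storage blob url -c $containerName -n &quot;version2.zip&quot; -o tsv
$escapedUrl2 = &quot;$($blobUrl2)?$($sas2)&quot;.Replace(&quot;&amp;&quot;,&quot;^^^&amp;&quot;)
az webapp config appsettings set -n $webAppName -g $resGroupName `
    --settings &quot;WEBSITE_RUN_FROM_PACKAGE=$escapedUrl2&quot;
</code></pre>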
<h3>Why bother?</h3>
<p>Now you might be thinking - why go to all this trouble? What was wrong with the <a href="https://docs.microsoft.com/en-us/azure/azure-functions/deployment-zip-push">previous zip deployment API</a>? And of course, the existing zip API still works just fine and you can keep using it if it meets your needs. But there are some benefits to taking the &quot;Run from Package&quot; approach, which you can read about in more detail <a href="https://github.com/Azure/app-service-announcements/issues/84">here</a>, but I'll briefly summarise them:</p>
<ul>
<li>Ability to rapidly switch back to a previous version without needing to re-upload anything. Your blob storage container functions a bit like a Docker container registry, containing versioned artefacts of your web applications.</li>
<li>A much more atomic switchover. Previously your new zip got unzipped over the top of the previous one, meaning that there was a small period during upgrade where your app was taken offline to avoid inconsistency. This approach does still do a site restart, but overall the whole upgrade is much faster.</li>
<li>Much faster cold start performance for Azure Functions running on the consumption plan, especially when the zip contains a large number of files (e.g. a Node.js application)</li>
<li>The <code>wwwroot</code> folder is now read-only. This could be interpreted as a disadvantage, as there are some applications that write into their own <code>wwwroot</code> folder - e.g. storing user data in <code>App_Data</code> - but this is no longer considered good practice for scalable cloud applications, so being denied this ability is arguably a good thing, and it improves predictability - you know exactly what code you're running.</li>
</ul>
<h3>What if I don't want to use blob storage?</h3>
<p>Now not everyone will like the idea of needing to point the web app at a blob container, with the inherent possibility that at some point in the future the app could break because someone inadvertently deleted the storage account or cycled the keys.</p>
<p>And &quot;Run from Package&quot; offers a second alternative. With this model, you just set the <code>WEBSITE_RUN_FROM_PACKAGE</code> app setting to the value <code>1</code>. So let's first use the Azure CLI to update our app setting to use this technique:</p>
<pre><code class="language-powershell">az webapp config appsettings set -n $webAppName -g $resGroupName `
--settings &quot;WEBSITE_RUN_FROM_PACKAGE=1&quot;
</code></pre>
<p>Next you need to get your zip file into the <code>D:\home\data\SitePackages</code> folder of your web app and update a <code>packagename.txt</code> file in the same folder to hold the name of the zip file you want to be live. Uploading the zip and editing <code>packagename.txt</code> are both possible with the Kudu REST API, but there's an easier way. When <code>WEBSITE_RUN_FROM_PACKAGE</code> has the value <code>1</code>, whenever you upload a zip file with the zip deployment API, instead of unzipping its contents to <code>wwwroot</code>, it will save the zip into <code>SitePackages</code> and update <code>packagename.txt</code> for you.</p>
<p>Suppose we do two deployments of our application using this technique:</p>
<pre><code class="language-powershell">az webapp deployment source config-zip -n $webAppName `
-g $resGroupName --src version2.zip
az webapp deployment source config-zip -n $webAppName `
-g $resGroupName --src version3.zip
</code></pre>
<p>We'll see that <code>version3.zip</code> is now live, but our <code>SitePackages</code> folder will actually contain both zip files, allowing us to easily switch back if we need to. If we use the Kudu debug console (accessible at https://mywebapp.scm.azurewebsites.net/DebugConsole) to explore what's in <code>SitePackages</code>, here's what we see:</p>
<pre><code>D:\home\data\SitePackages&gt;dir
 Volume in drive D is Windows
 Volume Serial Number is E859-323E

 Directory of D:\home\data\SitePackages

01/14/2019  02:49 PM    &lt;DIR&gt;          .
01/14/2019  02:49 PM    &lt;DIR&gt;          ..
01/14/2019  02:47 PM               157 20190114144716.zip
01/14/2019  02:49 PM               157 20190114144929.zip
01/14/2019  02:49 PM                18 packagename.txt
               3 File(s)            332 bytes
               2 Dir(s)  10,737,258,496 bytes free

D:\home\data\SitePackages&gt;type packagename.txt
20190114144929.zip
</code></pre>
<p>As you can see, the two uploaded zips have been named with timestamps, and <code>packagename.txt</code> has been updated for us. I like the simplicity of being able to just use the zip deployment API to automate this, but if you wanted to be able to automate rolling back to the previous version, there would be a bit more work involved (see my <a href="https://markheath.net/post/deploy-azure-functions-kudu-powershell">previous post</a> for some tips on calling the Kudu REST APIs you'd need to use to automate this).</p>
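<p>If you did want to script that rollback, one option is to overwrite <code>packagename.txt</code> directly via the Kudu VFS API using the site's publishing credentials. Here's a rough sketch of the idea (not production-hardened; the package name is just the older zip from the listing above):</p>
<pre><code class="language-powershell">$user = az webapp deployment list-publishing-credentials -n $webAppName `
    -g $resGroupName --query publishingUserName -o tsv
$pass = az webapp deployment list-publishing-credentials -n $webAppName `
    -g $resGroupName --query publishingPassword -o tsv
$auth = [Convert]::ToBase64String([Text.Encoding]::ASCII.GetBytes(&quot;$($user):$($pass)&quot;))
# point packagename.txt back at the earlier package (If-Match: * allows overwriting)
Invoke-RestMethod -Method Put `
    -Uri &quot;https://$webAppName.scm.azurewebsites.net/api/vfs/data/SitePackages/packagename.txt&quot; `
    -Headers @{ Authorization = &quot;Basic $auth&quot;; &quot;If-Match&quot; = &quot;*&quot; } `
    -Body &quot;20190114144716.zip&quot;
# restart so the site picks up the change
az webapp restart -n $webAppName -g $resGroupName
</code></pre>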
<h3>Summary</h3>
<p>The new &quot;Run from Package&quot; deployment option offers several benefits over previous techniques for deploying Web Apps and Function Apps, and gives you the choice between two places to store your zip files. You can access my full PowerShell &amp; Azure CLI demo script to try this out for yourself in <a href="https://gist.github.com/markheath/d3749e0ea5c0ae9126d21ef3f9f93c6c">this GitHub Gist</a>. Although I only showed deployment of a very simple static website here, you can use exactly the same technique to deploy any Web App or Function App.</p>
<img src="http://feeds.feedburner.com/~r/markdotnet/~4/FH2j7e0m4jo" height="1" width="1" alt=""/>https://markheath.net/post/run-from-packagehttps://markheath.net/post/managed-identity-key-vault-azure-functionsAccessing Key Vault from Azure Functions using Managed Identities2019-01-08T00:00:00Z2019-01-08T00:00:00ZMark Heathtest@example.com<p>Yesterday, I showed how we can <a href="https://markheath.net/post/deploying-azure-functions-with-azure-cli">deploy Azure Functions with the Azure CLI</a>. Today, I want to build on that and show how we can use the Azure CLI to add a &quot;Managed Service Identity&quot; (apparently now known simply as <a href="https://docs.microsoft.com/en-us/azure/active-directory/managed-identities-azure-resources/overview">&quot;Managed Identity&quot;</a>) to a Function App, and then use that identity to grant our Function App access to a secret stored in Azure Key Vault.</p>
<p>And again I'll show you how the entire thing can be automated with the Azure CLI.</p>
<h3>Step 1 - Create the Function App</h3>
<p>The first step is to create our Function App, for which we need a Resource Group, a Storage Account, and an App Service Plan. I covered this <a href="https://markheath.net/post/deploying-azure-functions-with-azure-cli">in more detail yesterday</a>, but here's the basic Azure CLI commands to provision a new Function App running on the consumption plan.</p>
<pre><code class="language-powershell"># create a resource group
$resourceGroup = &quot;AzureFunctionsMsiDemo&quot;
$location = &quot;westeurope&quot;
az group create -n $resourceGroup -l $location
# create a storage account
$rand = Get-Random -Minimum 10000 -Maximum 99999
$storageAccountName = &quot;funcsmsi$rand&quot;
az storage account create `
-n $storageAccountName `
-l $location `
-g $resourceGroup `
--sku Standard_LRS
# create a function app
$functionAppName = &quot;funcs-msi-$rand&quot;
az functionapp create `
-n $functionAppName `
--storage-account $storageAccountName `
--consumption-plan-location $location `
--runtime dotnet `
-g $resourceGroup
</code></pre>
<h3>Step 2 - Assign a managed identity</h3>
<p>We can use the <code>az functionapp identity assign</code> command to create a &quot;system assigned&quot; managed identity for this Function App.</p>
<pre><code class="language-powershell">az functionapp identity assign -n $functionAppName -g $resourceGroup
</code></pre>
<p>The response will include the <code>principalId</code> and <code>tenantId</code>, and we can get hold of them later if we need to with the following commands:</p>
<pre><code class="language-powershell">$principalId = az functionapp identity show -n $functionAppName -g $resourceGroup --query principalId -o tsv
$tenantId = az functionapp identity show -n $functionAppName -g $resourceGroup --query tenantId -o tsv
</code></pre>
<p>We can also find this identity in Azure Active Directory with the following commands (note that the &quot;principal id&quot; is also sometimes called the &quot;object id&quot;):</p>
<pre><code class="language-powershell">az ad sp show --id $principalId
# or
az ad sp list --display-name $functionAppName
</code></pre>
<p>Assigning an identity to our Function App means we'll have two new environment variables, <code>MSI_ENDPOINT</code> and <code>MSI_SECRET</code>, which enable applications to easily get an authentication token for this identity. However, we won't need to use them directly in this example.</p>
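<p>Out of interest, the token request that code running inside the app would make looks something like this (a sketch of the documented App Service MSI protocol, shown here in PowerShell; it only works from within the app's own environment):</p>
<pre><code class="language-powershell"># exchange MSI_ENDPOINT/MSI_SECRET for an access token for Key Vault
$tokenResponse = Invoke-RestMethod `
    -Uri &quot;$($env:MSI_ENDPOINT)?resource=https://vault.azure.net&amp;api-version=2017-09-01&quot; `
    -Headers @{ Secret = $env:MSI_SECRET }
$accessToken = $tokenResponse.access_token
</code></pre>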
<h3>Step 3 - Create a Key Vault and Store a Secret</h3>
<p>You may already have a Key Vault available to you, but if not, the Azure CLI makes it easy to create one. For the purposes of this demo, let's create a new Key Vault in the same resource group and add a secret to it.</p>
<pre><code class="language-powershell"># Create a key vault
$keyvaultname = &quot;funcsmsi$rand&quot;
az keyvault create -n $keyvaultname -g $resourceGroup
# Save a secret in the key vault
$secretName = &quot;MySecret&quot;
az keyvault secret set -n $secretName --vault-name $keyvaultname `
--value &quot;Super secret value!&quot;
# view the secret
az keyvault secret show -n $secretName --vault-name $keyvaultname
</code></pre>
<p>Each secret has an identifier in the form of a URL, and we'll need it in order to access the secret value. Here's how we can get the identifier for our secret:</p>
<pre><code class="language-powershell">$secretId = az keyvault secret show -n $secretName `
--vault-name $keyvaultname --query &quot;id&quot; -o tsv
</code></pre>
<h3>Step 4 - Grant the Managed Identity Read Access to Key Vault</h3>
<p>By default, the managed identity for our function app cannot access Key Vault. We can grant it access for reading secrets only with the following command:</p>
<pre><code class="language-powershell">az keyvault set-policy -n $keyvaultname -g $resourceGroup `
--object-id $principalId --secret-permissions get
# to see the access policies added:
az keyvault show -n $keyvaultname -g $resourceGroup `
--query &quot;properties.accessPolicies[?objectId == ``$principalId``]&quot;
</code></pre>
<h3>Step 5 - Add Application Settings referencing Key Vault secrets</h3>
<p>To access secrets in Key Vault from our Function App we can create application settings whose value has the form <code>@Microsoft.KeyVault(SecretUri=https://my-key-vault.vault.azure.net/secrets/my-secret/29e8f1b62cb34f3aa40f0757aea0388d)</code>.</p>
<p>Although it is possible to set these secrets using the Azure CLI, it's a pain because it gets into a mess escaping the secret value (or at least does so from PowerShell). I found I needed to use the rather obscure <code>^^</code> escape sequence to ensure that the final <code>)</code> character made it into my function app settings:</p>
<pre><code class="language-powershell">az functionapp config appsettings set -n $functionAppName -g $resourceGroup `
--settings &quot;Secret1=@Microsoft.KeyVault(SecretUri=$secretId^^)&quot;
</code></pre>
<p>When an app setting is defined like this, the Azure Functions runtime will use the Managed Identity to access the Key Vault and read the secret.</p>
<h3>Step 6 - Accessing the secrets in Azure Functions</h3>
<p>Once we've set this all up, an Azure Function can simply access the secret by reading the environment variable with the app setting name. Here's a very simple Azure Function I made to test this, which lets you read any environment variable.</p>
<pre><code class="language-cs">[FunctionName(&quot;GetAppSetting&quot;)]
public static async Task&lt;IActionResult&gt; Run(
[HttpTrigger(AuthorizationLevel.Function, &quot;get&quot;, Route = null)] HttpRequest req,
ILogger log)
{
string settingName = req.Query[&quot;name&quot;];
if (string.IsNullOrEmpty(settingName))
return new BadRequestObjectResult(&quot;Please pass a name on the query string&quot;);
log.LogInformation($&quot;Requesting setting {settingName}.&quot;);
var value = Environment.GetEnvironmentVariable(settingName);
return new OkObjectResult($&quot;{settingName}={value}&quot;);
}
</code></pre>
<p>So if you call this function and pass a <code>name</code> query string parameter of <code>Secret1</code>, you'll get the value stored in the Key Vault secret. You can also call it passing a <code>name</code> of <code>MSI_ENDPOINT</code> or <code>MSI_SECRET</code> to see the value of the environment variables that were added as a result of enabling a system managed identity.</p>
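<p>For example, you could call it like this (assuming <code>$functionKey</code> already holds the function's key - my full script linked below shows how to fetch that via Kudu):</p>
<pre><code class="language-powershell">Invoke-RestMethod &quot;https://$functionAppName.azurewebsites.net/api/GetAppSetting?name=Secret1&amp;code=$functionKey&quot;
</code></pre>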
<h3>Summary</h3>
<p>Managed Identities simplify the work required to grant your Function Apps the right to access secrets in Key Vault, and the whole process can be automated with the Azure CLI. To read more on using secrets in Key Vault with Azure Functions, check out <a href="https://medium.com/statuscode/getting-key-vault-secrets-in-azure-functions-37620fd20a0b">this article by Jeff Hollan</a>. And you can access my entire <a href="https://github.com/markheath/azure-functions-msi-demo/blob/master/deploy.ps1">demo PowerShell script here</a>, which includes the additional steps of deploying the sample Function App and using Kudu to get the function key, so you can automate calling the Azure Function without needing to visit the portal first.</p>
<img src="http://feeds.feedburner.com/~r/markdotnet/~4/SAfgxN7Pi1s" height="1" width="1" alt=""/>https://markheath.net/post/managed-identity-key-vault-azure-functionshttps://markheath.net/post/deploying-azure-functions-with-azure-cliDeploying Azure Functions with the Azure CLI2019-01-07T00:00:00Z2019-01-07T00:00:00ZMark Heathtest@example.com<p>In this post we'll explore how we can use the <a href="https://pluralsight.pxf.io/c/1192349/424552/7490?u=www%2Epluralsight%2Ecom%2Fcourses%2Fazure-cli-getting-started">Azure CLI</a> to deploy an Azure Function App running on the &quot;consumption plan&quot; along with all the associated resources such as a Storage Account and an Application Insights instance.</p>
<p>I'll be using PowerShell as my command prompt, but most of these commands translate very straightforwardly to a Bash shell if you prefer.</p>
<h3>Step 1 - Create a Resource Group</h3>
<p>As always with the Azure CLI, once we've logged in (with <code>az login</code>) and chosen the correct subscription (with <code>az account set -s &quot;MySub&quot;</code>), we should create a resource group to hold the various resources we're going to create.</p>
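<p>In full, those first two commands are simply:</p>
<pre><code class="language-powershell">az login
az account set -s &quot;MySub&quot;
</code></pre>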
<pre><code class="language-powershell">$resourceGroup = &quot;AzureFunctionsDemo&quot;
$location = &quot;westeurope&quot;
az group create -n $resourceGroup -l $location
</code></pre>
<h3>Step 2 - Create a Storage Account</h3>
<p>A number of features of Azure Functions work with a Storage Account, so it's a good idea to create a dedicated Storage Account to partner with a function app. Storage Accounts do require globally unique names, as the name forms part of the account's domain name, so I'm using a random number to help pick a suitable name, before creating the storage account using the standard LRS pricing tier.</p>
<pre><code class="language-powershell">$rand = Get-Random -Minimum 10000 -Maximum 99999
$storageAccountName = &quot;funcsdemo$rand&quot;
az storage account create `
-n $storageAccountName `
-l $location `
-g $resourceGroup `
--sku Standard_LRS
</code></pre>
<h3>Step 3 - Create a Function App</h3>
<p>Normally at this point we'd need to create an App Service Plan, but when we're using the consumption pricing tier there's a shortcut we can use, which is to set the <code>--consumption-plan-location</code> parameter when we create the Function App, and we'll automatically get a consumption App Service Plan created for us (with a name like &quot;WestEuropePlan&quot;) in our resource group.</p>
<p>We're going to be using V2 of the Azure Functions Runtime, and so I'll specify that I'm using the <code>dotnet</code> runtime, but you can also set this to <code>node</code> or <code>java</code>.</p>
<pre><code class="language-powershell">$functionAppName = &quot;funcs-demo-$rand&quot;
az functionapp create `
-n $functionAppName `
--storage-account $storageAccountName `
--consumption-plan-location $location `
--runtime dotnet `
-g $resourceGroup
</code></pre>
<h3>Step 4 - Deploy our Function App Code</h3>
<p>Obviously I'm assuming that we have some functions to deploy to the Function App. If we've created a C# Azure Functions project, we can package it up for release by running a <code>dotnet publish</code>, zipping up the resulting folder, and using <code>az functionapp deployment source config-zip</code> to deploy it.</p>
<pre><code class="language-powershell"># publish the code
dotnet publish -c Release
$publishFolder = &quot;FunctionsDemo/bin/Release/netcoreapp2.1/publish&quot;
# create the zip
$publishZip = &quot;publish.zip&quot;
if(Test-path $publishZip) {Remove-item $publishZip}
Add-Type -assembly &quot;system.io.compression.filesystem&quot;
[io.compression.zipfile]::CreateFromDirectory($publishFolder, $publishZip)
# deploy the zipped package
az functionapp deployment source config-zip `
-g $resourceGroup -n $functionAppName --src $publishZip
</code></pre>
<h3>Step 5 - Configure Application Insights</h3>
<p>Azure Functions offers excellent monitoring via Application Insights, so it makes sense to turn this on for all deployments. Unfortunately, the Azure CLI currently <a href="https://github.com/Azure/azure-cli/issues/5543">does not support</a> creating Application Insights directly, so we have to jump through a few hoops.</p>
<p>We'll use the <code>az resource create</code> command to create an App Insights instance, and since it's rather tricky to successfully pass correctly escaped JSON as a parameter in PowerShell, I'm creating a temporary JSON file:</p>
<pre><code class="language-powershell">$propsFile = &quot;props.json&quot;
'{&quot;Application_Type&quot;:&quot;web&quot;}' | Out-File $propsFile
$appInsightsName = &quot;funcsmsi$rand&quot;
az resource create `
-g $resourceGroup -n $appInsightsName `
--resource-type &quot;Microsoft.Insights/components&quot; `
--properties &quot;@$propsFile&quot;
Remove-Item $propsFile
</code></pre>
<p>Now we've created the Application Insights instance, we need to get hold of the instrumentation key, which we can do with this command:</p>
<pre><code class="language-powershell">$appInsightsKey = az resource show -g $resourceGroup -n $appInsightsName `
--resource-type &quot;Microsoft.Insights/components&quot; `
--query &quot;properties.InstrumentationKey&quot; -o tsv
</code></pre>
<p>And finally, we set the instrumentation key as an application setting on our Function App with the <code>az functionapp config appsettings set</code> command:</p>
<pre><code class="language-powershell">az functionapp config appsettings set -n $functionAppName -g $resourceGroup `
--settings &quot;APPINSIGHTS_INSTRUMENTATIONKEY=$appInsightsKey&quot;
</code></pre>
<h3>Step 6 - Configure Application Settings</h3>
<p>Optionally at this point, we may wish to configure some application settings, such as connection strings to other services. These can be configured with the same <code>az functionapp config appsettings set</code> command we just used (although watch out for some <a href="https://github.com/Azure/azure-cli/issues/7147">nasty escaping gotchas</a> if your setting values contain certain characters).</p>
<pre><code class="language-powershell">az functionapp config appsettings set -n $functionAppName -g $resourceGroup `
--settings &quot;MySetting1=Hello&quot; &quot;MySetting2=World&quot;
</code></pre>
<h3>Step 7 - Configure a Daily Use Quota</h3>
<p>Another optional feature you might want to consider setting up is a daily usage quota. One of the great things about the serverless Azure Functions consumption plan is that it offers near-infinite scale to handle huge spikes in load. But that does also leave you open to a &quot;denial of wallet attack&quot; where due to an external DoS attack or a coding mistake, you end up with a huge bill because your function app scaled out to hundreds of instances. The daily quota allows you to set a limit in terms of &quot;Gigabyte seconds&quot; (GB-s), which you might want to set just to be on the safe side when you're experimenting. For a production system, I'd probably rather leave this quota off (or set very high), and configure alerts instead to tell me when my usage is much higher than normal.</p>
<p>Here's the command that sets the daily usage quota to 50000 GB-s:</p>
<pre><code class="language-powershell">az functionapp update -g $resourceGroup -n $functionAppName `
--set dailyMemoryTimeQuota=50000
</code></pre>
<h3>Summary</h3>
<p>The Azure CLI provides us with an easy way to deploy and manage our Azure Function apps. Of course, you can also create an ARM template that contains the same resources, and deploy that with the CLI. Personally I find the CLI great when I'm experimenting and prototyping, and when I've got an application that's a bit more stable and ready for production, I might create an ARM template to allow deploying the whole thing in one go.</p>
<img src="http://feeds.feedburner.com/~r/markdotnet/~4/by6iQoGYDKQ" height="1" width="1" alt=""/>https://markheath.net/post/deploying-azure-functions-with-azure-clihttps://markheath.net/post/markdown-html-yaml-front-matterRendering Markdown to HTML and Parsing YAML Front Matter in C#2019-01-04T00:00:00Z2019-01-04T00:00:00ZMark Heathtest@example.com<p>A year ago I <a href="https://markheath.net/post/aspnet-core-blog-rewrite">rewrote this blog in ASP.NET Core</a>. One of the goals I had was to be able to transition to writing all my posts in Markdown as I wanted to get away from relying on the obsolete Windows Live Writer and simply use VS Code for editing posts.</p>
<p>However, I needed to be able to store each blog post as a Markdown file, and for that I decided to use <a href="https://jekyllrb.com/docs/front-matter/">&quot;YAML front matter&quot;</a> as a way to store metadata such as the post title and categories.</p>
<p>So the contents of a typical blog post file look something like this:</p>
<pre><code>---
title: Welcome!
categories: [ASP.NET Core, C#]
---
Welcome to my new blog! I built it with:
- C#
- ASP.NET Core
- StackOverflow
</code></pre>
<h3>Parsing YAML Front Matter with YamlDotNet</h3>
<p>First of all, to parse the YAML front matter, I used the <a href="https://www.nuget.org/packages/YamlDotNet/">YamlDotNet NuGet package</a>. It's a little bit fiddly, but you can use the <code>Parser</code> to find the front matter (it comes after a <code>StreamStart</code> and a <code>DocumentStart</code>), and then use an <code>IDeserializer</code> to deserialize the YAML into a suitable class with properties matching the YAML. In my case, the <code>Post</code> class supports setting many properties including the post title, categories, publication date, and even a list of comments, but for my blog I keep things simple and usually only set the title and categories (I use a file name convention to indicate the publication date).</p>
<pre><code class="language-cs">using YamlDotNet.Serialization;
using YamlDotNet.Serialization.NamingConventions;
using YamlDotNet.Core;
using YamlDotNet.Core.Events;
// ...
var yamlDeserializer = new DeserializerBuilder()
.WithNamingConvention(new CamelCaseNamingConvention())
.Build();
var text = File.ReadAllText(blogPostMarkdownFile);
using (var input = new StringReader(text))
{
var parser = new Parser(input);
parser.Expect&lt;StreamStart&gt;();
parser.Expect&lt;DocumentStart&gt;();
var post = yamlDeserializer.Deserialize&lt;Post&gt;(parser);
parser.Expect&lt;DocumentEnd&gt;();
}
</code></pre>
<h3>Rendering HTML with MarkDig</h3>
<p>To convert the Markdown into HTML, I used the superb <a href="https://github.com/lunet-io/markdig">MarkDig library</a>. This not only makes it super easy to convert basic Markdown to HTML, but supports several useful extensions. The library author, Alexandre Mutel, is very responsive to pull requests, so I was able to contribute a couple of minor improvements myself to add some features I wanted.</p>
<p>I created a basic <code>MarkdownRenderer</code> class that renders Markdown using the settings I want for my blog. A couple of things of note. First of all, you'll notice in <code>CreateMarkdownPipeline</code> that I've enabled a bunch of helpful extensions that are available out of the box. These gave me pretty much all the support I needed for things like syntax highlighting, tables, embedded YouTube videos, etc. I'm telling it to expect YAML front matter, so I don't need to strip off the YAML before passing it to the renderer. I needed to add a missing mime type, so I've shown how that can be done, even though it's included now. And the most hacky thing I needed to do was to ensure that generated tables had a specific class I wanted to be present for my CSS styling to work properly (I guess there may be an easier way to achieve this now).</p>
<p>Once the <code>MarkdownPipeline</code> has been constructed, we use a <code>MarkdownParser</code> in conjunction with a <code>HtmlRenderer</code> to parse the Markdown and then render it as HTML. One of the features I contributed to MarkDig was the ability to turn relative links into absolute ones. This is needed for my RSS feed, which needs to use absolute links, while my posts just use relative ones.</p>
<p>Here's the code for my <code>MarkdownRenderer</code> which you can adapt for your own needs:</p>
<pre><code class="language-cs">using Markdig;
using Markdig.Syntax;
using Markdig.Renderers.Html;
using Markdig.Extensions.MediaLinks;
using Markdig.Parsers;
using Markdig.Renderers;
// ...
public class MarkdownRenderer
{
private readonly MarkdownPipeline pipeline;
public MarkdownRenderer()
{
pipeline = CreateMarkdownPipeline();
}
public string Render(string markdown, bool absolute)
{
var writer = new StringWriter();
var renderer = new HtmlRenderer(writer);
if(absolute) renderer.BaseUrl = new Uri(&quot;https://markheath.net&quot;);
pipeline.Setup(renderer);
var document = MarkdownParser.Parse(markdown, pipeline);
renderer.Render(document);
writer.Flush();
return writer.ToString();
}
private static MarkdownPipeline CreateMarkdownPipeline()
{
var builder = new MarkdownPipelineBuilder()
.UseYamlFrontMatter()
.UseCustomContainers()
.UseEmphasisExtras()
.UseGridTables()
.UseMediaLinks()
.UsePipeTables()
.UseGenericAttributes(); // Must be last as it is one parser that is modifying other parsers
var me = builder.Extensions.OfType&lt;MediaLinkExtension&gt;().Single();
me.Options.ExtensionToMimeType[&quot;.mp3&quot;] = &quot;audio/mpeg&quot;; // was missing (should be in the latest version now though)
builder.DocumentProcessed += document =&gt; {
foreach(var node in document.Descendants())
{
if (node is Markdig.Syntax.Block)
{
if (node is Markdig.Extensions.Tables.Table)
{
node.GetAttributes().AddClass(&quot;md-table&quot;);
}
}
}
};
return builder.Build();
}
}
</code></pre>
<img src="http://feeds.feedburner.com/~r/markdotnet/~4/bRD9ORDPRQE" height="1" width="1" alt=""/>https://markheath.net/post/markdown-html-yaml-front-matterhttps://markheath.net/post/migrating-to-new-servicebus-sdkMigrating to the New Azure Service Bus SDK2019-01-03T00:00:00Z2019-01-03T00:00:00ZMark Heathtest@example.com<p>Just over a year ago, a new .NET SDK for Azure Service Bus was <a href="https://azure.microsoft.com/en-us/blog/azure-service-bus-net-standard-client-ga/">released</a>. This replaces the old <a href="https://www.nuget.org/packages/WindowsAzure.ServiceBus/">WindowsAzure.ServiceBus</a> NuGet package with the <a href="https://www.nuget.org/packages/Microsoft.Azure.ServiceBus/">Microsoft.Azure.ServiceBus</a> NuGet package.</p>
<p>You're not forced to change over to the new SDK if you don't want to. The old one still works just fine, and even continues to get updates. However, there are some benefits to switching over, so in this post I'll highlight the key differences and some potential gotchas to take into account if you do want to make the switch.</p>
<h3>Benefits of the new SDK</h3>
<p>First of all, why did we even need a new SDK? Well, the old one supported .NET 4.6 only, while the new one is <strong>.NET Standard 2.0 compatible</strong>, making it usable cross-platform in .NET core applications. It's also <strong>open source</strong>, available at <a href="https://github.com/Azure/azure-service-bus-dotnet">https://github.com/Azure/azure-service-bus-dotnet</a>, meaning you can easily examine the code, submit issues and pull requests.</p>
<p>It has a <strong>plugin architecture</strong>, supporting custom plugins for things like message compression or attachments. There are a <a href="https://github.com/Azure/azure-service-bus-dotnet-plugins">few useful plugins</a> already available. We encrypt all our messages with Azure Key Vault before sending them to Service Bus, so I'm looking forward to using the plugin architecture to simplify that code.</p>
<p>On top of that, the API has generally been cleaned up and improved, and its very much the future of the Azure Service Bus SDK.</p>
<h3>Default transport type</h3>
<p>One of the first gotchas I ran into was that there is a new default &quot;transport type&quot;. The old SDK by default used what it called <a href="https://docs.microsoft.com/en-us/dotnet/api/microsoft.servicebus.messaging.transporttype?view=azure-dotnet#Microsoft_ServiceBus_Messaging_TransportType_Amqp">&quot;NetMessaging&quot;</a>, a proprietary Azure Service Bus protocol, even though the recommended option was the industry standard <a href="https://en.wikipedia.org/wiki/Advanced_Message_Queuing_Protocol">AMQP</a>.</p>
<p>The new SDK however <a href="https://docs.microsoft.com/en-us/dotnet/api/microsoft.azure.servicebus.transporttype?view=azure-dotnet">defaults to AMQP</a> over port 5671. This was blocked by my work firewall, so I had to switch to the other option of AMQP over WebSockets which uses port 443. If you need to configure this option, append <code>;TransportType=AmqpWebSockets</code> to the end of your connection string.</p>
<p>One unfortunate side-effect of this switch from the <code>NetMessaging</code> protocol to AMQP is the performance of batching. I blogged a while back about the <a href="https://markheath.net/post/speed-up-azure-service-bus-with-batching">dramatic speed improvements available by sending and receiving messages in batches</a>. Whilst <em>sending</em> batches of messages with AMQP seems to have similar performance, when you attempt to <em>receive</em> batches, with AMQP you may get batches significantly smaller than the batch size you request, which slows things down considerably. The explanation for this is <a href="https://github.com/Azure/azure-service-bus-dotnet/issues/441">here</a>, and the issue can be mitigated somewhat by setting the <a href="https://docs.microsoft.com/en-us/dotnet/api/microsoft.servicebus.messaging.messagereceiver.prefetchcount?view=azure-dotnet"><code>MessageReceiver.PrefetchCount</code></a> property to a suitably large value.</p>
<p>Here's some simple code you can use to check out the performance of batch sending/receiving with the new SDK. It also shows off the basic operation of the <code>QueueClient</code> and <code>MessageReceiver</code> classes in the new SDK, along with the <code>ManagementClient</code> which allows us to create and delete queues.</p>
<pre><code class="language-cs">string connectionString = // your connection string - remember to add ;TransportType=AmqpWebSockets if port 5671 is blocked
const string queueName = &quot;MarkHeathTestQueue&quot;;
// PART 1 - CREATE THE QUEUE
var managementClient = new ManagementClient(connectionString);
if (await managementClient.QueueExistsAsync(queueName))
{
// ensure we start the test with an empty queue
await managementClient.DeleteQueueAsync(queueName);
}
await managementClient.CreateQueueAsync(queueName);
// PART 2 - SEND A BATCH OF MESSAGES
const int messages = 1000;
var stopwatch = new Stopwatch();
var client = new QueueClient(connectionString, queueName);
stopwatch.Start();
await client.SendAsync(Enumerable.Range(0, messages).Select(n =&gt;
{
var body = $&quot;Hello World, this is message {n}&quot;;
var message = new Message(Encoding.UTF8.GetBytes(body));
message.UserProperties[&quot;From&quot;] = &quot;Mark Heath&quot;;
return message;
}).ToList());
Console.WriteLine($&quot;{stopwatch.ElapsedMilliseconds}ms to send {messages} messages&quot;);
stopwatch.Reset();
// PART 3 - RECEIVE MESSAGES
stopwatch.Start();
int received = 0;
var receiver = new MessageReceiver(connectionString, queueName);
receiver.PrefetchCount = 1000; // https://github.com/Azure/azure-service-bus-dotnet/issues/441
while (received &lt; messages)
{
// unlike the old SDK which picked up the whole thing in 1 batch, this will typically pick up batches in the size range 50-200
var rx = (await receiver.ReceiveAsync(messages, TimeSpan.FromSeconds(5)))?.ToList();
Console.WriteLine($&quot;Received a batch of {rx.Count}&quot;);
if (rx?.Count &gt; 0)
{
// complete a batch of messages using their lock tokens
await receiver.CompleteAsync(rx.Select(m =&gt; m.SystemProperties.LockToken));
received += rx.Count;
}
}
Console.WriteLine($&quot;{stopwatch.ElapsedMilliseconds}ms to receive {received} messages&quot;);
</code></pre>
<h3>Management Client</h3>
<p>Another change in the new SDK is that instead of the old <code>NamespaceManager</code>, we have <code>ManagementClient</code>. Many of the method names are the same or very similar, so it isn't too hard to port code over.</p>
<p>One gotcha I ran into is that <code>DeleteQueueAsync</code> (and the equivalent topic and subscription methods) now throw <code>MessagingEntityNotFoundException</code> if you try to delete something that doesn't exist.</p>
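<p>So if your old code relied on delete being a no-op when the entity was missing, you'll want to guard the call. A simple sketch:</p>
<pre><code class="language-cs">// either check for existence first...
if (await managementClient.QueueExistsAsync(queueName))
{
    await managementClient.DeleteQueueAsync(queueName);
}

// ...or catch the exception if the queue might not be there
try
{
    await managementClient.DeleteQueueAsync(queueName);
}
catch (MessagingEntityNotFoundException)
{
    // nothing to delete
}
</code></pre>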
<h3>BrokeredMessage replaced by Message</h3>
<p>The old SDK used a class called <a href="https://docs.microsoft.com/en-us/dotnet/api/microsoft.servicebus.messaging.brokeredmessage?view=azure-dotnet"><code>BrokeredMessage</code></a> to represent a message, whereas now it's just <a href="https://docs.microsoft.com/en-us/dotnet/api/microsoft.azure.servicebus.message?view=azure-dotnet"><code>Message</code></a>.</p>
<p>It's had a bit of a reorganize, so things like <code>DeliveryCount</code> and <code>LockToken</code> are now found in <a href="https://docs.microsoft.com/en-us/dotnet/api/microsoft.azure.servicebus.message.systempropertiescollection?view=azure-dotnet"><code>Message.SystemProperties</code></a>. Custom message metadata is stored in <code>UserProperties</code> instead of <code>Properties</code>. Also, instead of providing the message body as a <code>Stream</code>, it is now a <code>byte[]</code>, which makes more sense.</p>
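<p>To illustrate the reorganization, here's a short sketch of working with a received <code>Message</code> (using a <code>MessageReceiver</code> as in the benchmark snippet above), showing where the familiar properties now live:</p>
<pre><code class="language-cs">var message = await receiver.ReceiveAsync();

// the body is now a byte[] rather than a Stream
var text = Encoding.UTF8.GetString(message.Body);

// broker-maintained properties have moved under SystemProperties
var deliveryCount = message.SystemProperties.DeliveryCount;
var lockToken = message.SystemProperties.LockToken;

// custom metadata lives in UserProperties (previously Properties on BrokeredMessage)
message.UserProperties[&quot;From&quot;] = &quot;Mark Heath&quot;;
</code></pre>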
<p>Another significant change is that <code>BrokeredMessage</code> used to have convenience methods like <code>CompleteAsync</code>, <code>AbandonAsync</code>, <code>RenewLockAsync</code> and <code>DeadLetterAsync</code>. You now need to make use of the <code>ClientEntity</code> to perform these actions (with the exception of <code>RenewLockAsync</code> to be discussed shortly).</p>
<h3>ClientEntity changes</h3>
<p>The new SDK retains the concept of a base <a href="https://docs.microsoft.com/en-us/dotnet/api/microsoft.azure.servicebus.cliententity?view=azure-dotnet"><code>ClientEntity</code></a> which has derived classes such as <code>QueueClient</code>, <code>TopicClient</code>, <code>SubscriptionClient</code> etc. It's here that you'll find the <code>CompleteAsync</code>, <code>AbandonAsync</code>, and <code>DeadLetterAsync</code> methods, but one that is conspicuous by its absence is <code>RenewLockAsync</code>.</p>
<p>This means that if you're using <a href="https://docs.microsoft.com/en-us/dotnet/api/microsoft.azure.servicebus.queueclient.registermessagehandler?view=azure-dotnet"><code>QueueClient.RegisterMessageHandler</code></a> (previously called <a href="https://docs.microsoft.com/en-us/dotnet/api/microsoft.servicebus.messaging.queueclient.onmessage?view=azure-dotnet"><code>QueueClient.OnMessage</code></a>) or similar to handle messages, you don't have a way of renewing the lock for longer than the <code>MaxAutoRenewDuration</code> duration specified in <code>MessageHandlerOptions</code> (which used to be called <code>OnMessageOptions.AutoRenewTimeout</code>). I know that is a little bit of an edge case, but we were relying on being able to call <code>BrokeredMessage.RenewLockAsync</code> in a few places to extend the timeout further. With the new SDK, the ability to renew a lock is only available if you are using <code>MessageReceiver</code>, which has a <a href="https://docs.microsoft.com/en-us/dotnet/api/microsoft.azure.servicebus.core.messagereceiver.renewlockasync?view=azure-dotnet"><code>RenewLockAsync</code> method</a>.</p>
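<p>If you do need manual lock renewal, a rough sketch with <code>MessageReceiver</code> looks something like this:</p>
<pre><code class="language-cs">var receiver = new MessageReceiver(connectionString, queueName);
var message = await receiver.ReceiveAsync();

// simulate work that takes longer than the lock duration
await Task.Delay(TimeSpan.FromSeconds(45));

// extend the lock on this specific message before carrying on
await receiver.RenewLockAsync(message);

// ... more work ...
await receiver.CompleteAsync(message.SystemProperties.LockToken);
</code></pre>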
<p>A few other minor changes required a bit of code re-organization: the old <code>Close</code> methods are now <code>CloseAsync</code>, which makes it trickier to use the <code>Dispose</code> pattern. There is no longer a <a href="https://docs.microsoft.com/en-us/dotnet/api/microsoft.servicebus.messaging.cliententity.abort?view=azure-dotnet"><code>ClientEntity.Abort</code> method</a> - presumably you now just call <code>CloseAsync</code> to shut down the message handling pump. And when you <a href="https://docs.microsoft.com/en-us/dotnet/api/microsoft.azure.servicebus.messagehandleroptions.-ctor?view=azure-dotnet">create <code>MessageHandlerOptions</code></a> you are required to provide an exception handler.</p>
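<p>For example, here's roughly what setting up a message handler looks like with the new SDK - note the exception handler passed into the <code>MessageHandlerOptions</code> constructor (the property values here are just illustrative):</p>
<pre><code class="language-cs">var queueClient = new QueueClient(connectionString, queueName);

// the exception handler is now a required constructor argument
var options = new MessageHandlerOptions(args =&gt;
{
    Console.WriteLine($&quot;Message handler error: {args.Exception.Message}&quot;);
    return Task.CompletedTask;
})
{
    MaxConcurrentCalls = 4,
    AutoComplete = false,
    MaxAutoRenewDuration = TimeSpan.FromMinutes(5)
};

queueClient.RegisterMessageHandler(async (message, cancellationToken) =&gt;
{
    Console.WriteLine($&quot;Received: {Encoding.UTF8.GetString(message.Body)}&quot;);
    await queueClient.CompleteAsync(message.SystemProperties.LockToken);
}, options);
</code></pre>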
<h3>Summary</h3>
<p>The new Azure Service Bus SDK offers lots of improvements over the old one, and the transition isn't too difficult, but there are a few gotchas to be aware of and I've highlighted some of the ones that I ran into. Hopefully this will be of use to you if you're planning to upgrade.</p>
<img src="http://feeds.feedburner.com/~r/markdotnet/~4/v52Kq5I6ujo" height="1" width="1" alt=""/>https://markheath.net/post/migrating-to-new-servicebus-sdkhttps://markheath.net/post/2018-in-review2018 in Review2018-12-31T00:00:00Z2018-12-31T00:00:00ZMark Heathtest@example.com<p>At the end of each year I like to look back and take stock of what I've done in the previous year (I've done this a few times now: <a href="https://markheath.net/post/2017-in-review">2017</a>,<a href="https://markheath.net/post/2016-in-review">2016</a>,<a href="https://markheath.net/post/2015-in-review">2015</a>, <a href="https://markheath.net/post/2013-in-review">2013</a>).
2018 was an exciting year for me, featuring a number of &quot;firsts&quot;.</p>
<h3>I visited America!</h3>
<p>Undoubtedly one of my highlights of the year was my first ever visit to the USA for the Microsoft MVP Summit. I had an amazing time there, making many new friends and learning loads about what's new in the world of developer tools and Azure. I'm very grateful to Microsoft for letting me be part of this program, to my employer NICE for letting me have the week off work and covering my flights, and to my wife for freeing me to go for a week while she looked after our five children! For a flavour of what goes on at the MVP Summit check out this <a href="https://medium.com/@davidpine7/microsoft-mvp-global-summit-2018-a362e88f8fb9">great writeup</a> from David Pine.</p>
<p><img src="https://cdn-images-1.medium.com/max/2000/1*qrRmnAsvwujTLfKLVTee4w.jpeg" alt="MVP Summit" /></p>
<p>I was also very privileged to be re-awarded as an MVP in July, which means I'm going to make my second visit to Redmond in 2019 which I'm really looking forward to.</p>
<h3>I spoke at some conferences!</h3>
<p>Another huge first for me was speaking at two developer conferences. I've always done lots of technical talks, but that has generally been in the context of my work, or local user groups. After being rejected from the first few conferences I submitted to, I found myself accepted at two in quick succession. First, <a href="https://markheath.net/post/prognet-2018-durable-functions">I spoke on Azure Durable Functions at ProgNET</a> in London, and then <a href="https://markheath.net/post/linq-stinks">on LINQ at the inaugural Techorama Netherlands conference</a>. Both conferences were really well run, and as always, were wonderful opportunities to make new friends.</p>
<p>Of course, I still gave a number of local user group talks, including</p>
<ul>
<li><a href="https://www.meetup.com/DeveloperSouthCoast/events/255191019/">Durable Functions at Developer South Coast</a>,</li>
<li><a href="https://www.meetup.com/Azure-Thames-Valley/events/256435361">Containers on Azure at Azure Thames Valley</a>,</li>
<li><a href="https://www.meetup.com/Docker-Hampshire/events/246914331/">Docker on Azure at Docker Southampton</a> and</li>
<li><a href="https://www.meetup.com/devopsoxford/events/246748315/">Technical Debt at DevOps Oxford</a></li>
</ul>
<p>I've submitted some more conference talk proposals for 2019, and so hopefully I'll have some news to share in the near future. I'm also open to speaking at local user groups, especially in the south of England, so feel free to <a href="https://twitter.com/mark_heath">reach out</a> to me if you'd like me to visit your user group.</p>
<h3>I published four Azure related Pluralsight courses!</h3>
<p>The two main technologies I've been focusing on this year are Azure Functions and Containers on Azure, and that's reflected in the four Pluralsight courses I released:</p>
<ul>
<li><a href="https://pluralsight.pxf.io/c/1192349/424552/7490?u=www%2Epluralsight%2Ecom%2Fcourses%2Fmicrosoft-azure-serverless-functions-create">Microsoft Azure Developer: Create Serverless Functions</a></li>
<li><a href="https://pluralsight.pxf.io/c/1192349/424552/7490?u=www%2Epluralsight%2Ecom%2Fcourses%2Fmicrosoft-azure-containers-deploying-managing">Microsoft Azure Developer: Deploying and Managing Containers</a></li>
<li><a href="https://pluralsight.pxf.io/c/1192349/424552/7490?u=www%2Epluralsight%2Ecom%2Fcourses%2Fazure-durable-functions-fundamentals">Azure Durable Functions Fundamentals</a></li>
<li><a href="https://pluralsight.pxf.io/c/1192349/424552/7490?u=www%2Epluralsight%2Ecom%2Fcourses%2Fazure-container-instances-getting-started">Azure Container Instances: Getting Started</a></li>
</ul>
<p>One great thing about the two &quot;Microsoft Azure Developer&quot; courses is that they are being made available to anyone to watch for free, even if they haven't got a Pluralsight subscription, as part of the excellent <a href="https://docs.microsoft.com/en-us/learn/">Microsoft Learn</a> initiative.</p>
<p>I've not finalised my plans for Pluralsight courses in 2019 yet, but expect plenty more Azure content, and I'm also hoping to update my <a href="https://pluralsight.pxf.io/c/1192349/424552/7490?u=www%2Epluralsight%2Ecom%2Fcourses%2Fazure-functions-fundamentals">Azure Functions Fundamentals</a> course to reflect the updates to the platform since I first released it.</p>
<h3>I created lots of blogs, videos and GitHub projects!</h3>
<p>Wherever possible I like to share the things I'm learning for free, and so this year my contributions to the community included:</p>
<ul>
<li><strong>75</strong> <a href="https://markheath.net/archive">blog posts</a></li>
<li><strong>28</strong> <a href="https://www.youtube.com/channel/UChV2-HyJ9XzsKLQRztd7Pmw">YouTube videos</a></li>
<li><strong>476</strong> <a href="https://github.com/markheath?tab=overview&amp;from=2018-12-01&amp;to=2018-12-31">commits to open source GitHub projects</a></li>
</ul>
<p>The major topics I covered were of course Azure Functions (especially Durable Functions) and Containers on Azure, but also my Techorama talk made me revisit LINQ and produce <a href="https://markheath.net/category/morelinq">a series of tutorials on MoreLINQ</a>.</p>
<p>There were a few things I sadly didn't find as much time for as I'd hoped. Though I've done a lot of prototyping of a .NET Standard version of NAudio, there are a couple of tricky questions around UWP support I haven't yet decided how to resolve. And although I made a good start on this year's <a href="https://adventofcode.com/">Advent of Code</a>, this December was a little bit too full for me to continue past day 14.</p>
<h3>What else?</h3>
<p>A few other things worth noting from this year. In my day job, working with NICE as a software architect, the <a href="https://www.nice.com/protecting/public-safety/nice-investigate">software we're building to help police forces manage digital evidence</a> is being really well received by several forces who are at various stages of adopting it. This project promises to keep me busy staying on top of best architectural practices for Azure, and will no doubt drive much of my learning focus for the next year.</p>
<p>And of course life is much more than programming. One of the reasons I try to keep travel to a minimum with work is to spend as much time as possible with my family. This year I've taken my eldest son around various university open days, as well as taught various children guitar, football, computer maintenance, bike riding, and (of course) programming (we're currently enjoying the <a href="https://getcodingkids.com/">Get Coding</a> books).</p>
<p>I also love being part of my local church community, where I have been teaching my way through <a href="https://en.wikipedia.org/wiki/Twelve_Minor_Prophets">the minor prophets</a>, as well as enjoying playing more electric guitar and piano with a great bunch of musicians. One of the things I appreciate most about my church is the great diversity: it's made up of people of all ages, nationalities, social and educational backgrounds, and is intentional about serving and welcoming those who are disadvantaged in various ways. So it's been great to see a big focus in the world of programming over the last year on improving diversity and inclusion.</p>
<p>Finally, I want to say a big thank you to everyone who's supported me this year, watching my courses, attending my talks, reading my blog posts. It's been great to meet many of you in person and hopefully I'll connect with more of you in 2019.</p>
<img src="http://feeds.feedburner.com/~r/markdotnet/~4/FNSjqwXA_hU" height="1" width="1" alt=""/>https://markheath.net/post/2018-in-reviewhttps://markheath.net/post/limit-audio-naudioHow to Limit Audio Files with NAudio2018-12-21T00:00:00Z2018-12-21T00:00:00ZMark Heathtest@example.com<p>A while ago I wrote an article explaining how you can <a href="https://markheath.net/post/normalize-audio-naudio">normalize audio using NAudio</a>. Normalizing is a way of increasing the volume of an audio file by the largest possible amount without clipping.</p>
<p>However, I also mentioned in that article that in many cases, <a href="https://en.wikipedia.org/wiki/Dynamic_range_compression">dynamic range compression</a> is a better option. That's because in many audio files there are often a few stray peaks that are very loud which can make it impossible to bring up the overall gain without clipping. What we'd like to do is <em>reduce</em> the level of those peaks, giving us more headroom to increase the gain of the rest of the audio.</p>
<p>In this post, I'll show how we can implement a simple &quot;limiter&quot; which is essentially a compressor with a very steep compression ratio, that can be used on a variety of types of audio, including spoken word, to achieve a nice consistent volume level throughout.</p>
<h3>An audio effects framework</h3>
<p>NAudio has an interface called <code>ISampleProvider</code> which is ideal for implementing our limiter effect, and I've created an <code>Effect</code> base class on top of it to make implementing effects simpler.</p>
<p>The <code>Effect</code> class implements <code>ISampleProvider</code> and uses the decorator pattern, wrapping a source <code>ISampleProvider</code>. It requires derived effect classes to implement a method called <code>Sample</code> which is called for every stereo or mono sample frame. There are also some optional methods you can implement, such as <code>ParamsChanged</code> which is called whenever the values of the effect parameters are modified to allow recalculation of any constants, and <code>Block</code> which is called before each block of samples is processed, for efficiency.</p>
<p>The design of this is inspired by the <a href="https://www.reaper.fm/sdk/js/js.php">JSFX effects framework</a> that is part of the REAPER DAW. It's essentially a DSL for implementing effects, which I've found very useful for implementing my own custom effects such as this <a href="https://forum.cockos.com/showthread.php?t=29349&amp;highlight=trance+gate+js">trance gate</a>. To make it easier to port a JSFX effect to C#, I've added some additional static helper methods to match the method names used by JSFX.</p>
<pre><code class="language-cs">abstract class Effect : ISampleProvider
{
private ISampleProvider source;
private bool paramsChanged;
public float SampleRate { get; set; }
public Effect(ISampleProvider source)
{
this.source = source;
SampleRate = source.WaveFormat.SampleRate;
}
protected void RegisterParameters(params EffectParameter[] parameters)
{
paramsChanged = true;
foreach(var param in parameters)
{
param.ValueChanged += (s, a) =&gt; paramsChanged = true;
}
}
protected abstract void ParamsChanged();
public int Read(float[] samples, int offset, int count)
{
if (paramsChanged)
{
ParamsChanged();
paramsChanged = false;
}
var samplesAvailable = source.Read(samples, offset, count);
Block(samplesAvailable);
if (WaveFormat.Channels == 1)
{
for (int n = 0; n &lt; samplesAvailable; n++)
{
float right = 0.0f;
Sample(ref samples[n], ref right);
}
}
else if (WaveFormat.Channels == 2)
{
for (int n = 0; n &lt; samplesAvailable; n+=2)
{
Sample(ref samples[n], ref samples[n+1]);
}
}
return samplesAvailable;
}
public WaveFormat WaveFormat { get { return source.WaveFormat; } }
public abstract string Name { get; }
// helper base methods these are primarily to enable derived classes to use a similar
// syntax to REAPER's JS effects
protected const float log2db = 8.6858896380650365530225783783321f; // 20 / ln(10)
protected const float db2log = 0.11512925464970228420089957273422f; // ln(10) / 20
protected static float min(float a, float b) { return Math.Min(a, b); }
protected static float max(float a, float b) { return Math.Max(a, b); }
protected static float abs(float a) { return Math.Abs(a); }
protected static float exp(float a) { return (float)Math.Exp(a); }
protected static float sqrt(float a) { return (float)Math.Sqrt(a); }
protected static float sin(float a) { return (float)Math.Sin(a); }
protected static float tan(float a) { return (float)Math.Tan(a); }
protected static float cos(float a) { return (float)Math.Cos(a); }
protected static float pow(float a, float b) { return (float)Math.Pow(a, b); }
protected static float sign(float a) { return Math.Sign(a); }
protected static float log(float a) { return (float)Math.Log(a); }
protected static float PI { get { return (float)Math.PI; } }
/// &lt;summary&gt;
/// called before each block is processed
/// &lt;/summary&gt;
/// &lt;param name=&quot;samplesblock&quot;&gt;number of samples in this block&lt;/param&gt;
public virtual void Block(int samplesblock)
{
}
/// &lt;summary&gt;
/// called for each sample
/// &lt;/summary&gt;
protected abstract void Sample(ref float spl0, ref float spl1);
public override string ToString()
{
return Name;
}
}
</code></pre>
<p>You'll also notice that there is support for the concept of an <code>EffectParameter</code>. This is just a way of allowing users to adjust a parameter between a minimum and maximum and making sure that the effect is notified of any parameter changes.</p>
<pre><code class="language-cs">class EffectParameter
{
public float Min {get;}
public float Max {get;}
public string Description {get;}
private float currentValue;
public event EventHandler ValueChanged;
public float CurrentValue
{
get { return currentValue;}
set
{
if (value &lt; Min || value &gt; Max)
throw new ArgumentOutOfRangeException(nameof(CurrentValue));
if (currentValue != value)
ValueChanged?.Invoke(this, EventArgs.Empty);
currentValue = value;
}
}
public EffectParameter(float defaultValue, float minimum, float maximum, string description)
{
Min = minimum;
Max = maximum;
Description = description;
CurrentValue = defaultValue;
}
}
</code></pre>
<h3>A simple limiter</h3>
<p>For this example, the limiter I've chosen to port is one shipped with REAPER and created by Schwa, who's the author of a whole host of super useful effects. It's nice and simple, and the only modifications I made were to allow larger gain boost values and to set the brickwall default to -0.1dB.</p>
<pre><code class="language-cs">class SoftLimiter : Effect
{
public override string Name =&gt; &quot;Soft Clipper/ Limiter&quot;;
public EffectParameter Boost { get; } = new EffectParameter(0f, 0f, 18f, &quot;Boost&quot;);
public EffectParameter Brickwall { get; } = new EffectParameter(-0.1f, -3.0f, 1f, &quot;Output Brickwall(dB)&quot;);
public SoftLimiter(ISampleProvider source):base(source)
{
RegisterParameters(Boost, Brickwall);
}
private float amp_dB = 8.6562f;
private float baseline_threshold_dB = -9f;
private float a = 1.017f;
private float b = -0.025f;
private float boost_dB;
private float limit_dB;
private float threshold_dB;
protected override void ParamsChanged()
{
boost_dB = Boost.CurrentValue;
limit_dB = Brickwall.CurrentValue;
threshold_dB = baseline_threshold_dB + limit_dB;
}
protected override void Sample(ref float spl0, ref float spl1)
{
var dB0 = amp_dB * log(abs(spl0)) + boost_dB;
var dB1 = amp_dB * log(abs(spl1)) + boost_dB;
if (dB0 &gt; threshold_dB)
{
var over_dB = dB0 - threshold_dB;
over_dB = a * over_dB + b * over_dB * over_dB;
dB0 = min(threshold_dB + over_dB, limit_dB);
}
if (dB1 &gt; threshold_dB)
{
var over_dB = dB1 - threshold_dB;
over_dB = a * over_dB + b * over_dB * over_dB;
dB1 = min(threshold_dB + over_dB, limit_dB);
}
spl0 = exp(dB0 / amp_dB) * sign(spl0);
spl1 = exp(dB1 / amp_dB) * sign(spl1);
}
}
</code></pre>
<h3>Using the limiter</h3>
<p>Using the limiter couldn't be easier. In this example we use an <code>AudioFileReader</code> to read the input file (this supports multiple file types including WAV, MP3 etc). Next we create an instance of <code>SoftLimiter</code> and set the <code>Boost</code> parameter to the amount of boost we want. Here I'm asking for 12dB of gain. Essentially this means that any audio below -12dB will be amplified without clipping, and the soft clipping will be applied to any audio above -12dB.</p>
<p>Finally we use <code>WaveFileWriter.CreateWaveFile16</code> to write the limited audio into a 16 bit WAV file. Obviously you can use other NAudio-supported output file formats if you want, such as using the <code>MediaFoundationEncoder</code> for MP3 (see the sketch after the snippet below).</p>
<pre><code class="language-cs">var inPath = @&quot;C:\Users\mheath\Documents\my-input-file.wav&quot;;
var outPath = @&quot;C:\Users\mheath\Documents\my-output-file.wav&quot;;
using (var reader = new AudioFileReader(inPath))
{
var limiter = new SoftLimiter(reader);
limiter.Boost.CurrentValue = 12;
WaveFileWriter.CreateWaveFile16(outPath, limiter);
}
</code></pre>
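<p>And as mentioned above, you could swap the WAV output for MP3 using the <code>MediaFoundationEncoder</code>. Here's a rough sketch (the MP3 path is illustrative; it assumes a Windows system with a Media Foundation MP3 encoder available, and depending on your application type you may need to call <code>MediaFoundationApi.Startup()</code> first):</p>
<pre><code class="language-cs">// sketch: write the limited audio out as MP3 instead of WAV
var mp3Path = @&quot;C:\Users\mheath\Documents\my-output-file.mp3&quot;; // illustrative path
using (var reader = new AudioFileReader(inPath))
{
    var limiter = new SoftLimiter(reader);
    limiter.Boost.CurrentValue = 12;
    MediaFoundationApi.Startup(); // may be required depending on application type
    // EncodeToMp3 expects an IWaveProvider, so convert the ISampleProvider first
    MediaFoundationEncoder.EncodeToMp3(limiter.ToWaveProvider16(), mp3Path, 192000);
}
</code></pre>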
<h3>Summary</h3>
<p>With a basic effects framework in place, it's not too hard to port an existing limiter algorithm from another language into C# and use it with NAudio. If you'd like to see more examples of effects ported to NAudio, take a look at <a href="https://github.com/markheath/skypevoicechanger/tree/master/SkypeVoiceChanger/Effects">these</a> from an early version of my Skype Voice Changer application, where I took a bunch of JSFX effects and ported them to C#.</p>
<img src="http://feeds.feedburner.com/~r/markdotnet/~4/oiLZJAslNnQ" height="1" width="1" alt=""/>https://markheath.net/post/limit-audio-naudiohttps://markheath.net/post/durable-functions-manage-historyManaging Durable Functions Orchestration History2018-12-17T00:00:00Z2018-12-17T00:00:00ZMark Heathtest@example.com<p>I had the privilege of <a href="https://www.meetup.com/DeveloperSouthCoast/events/255191019/">speaking about Durable Functions</a> at the Developer South Coast user group a week ago, and I took the opportunity to update my <a href="https://github.com/markheath/durable-functions-ecommerce-sample">Durable Functions e-Commerce sample app</a> to take advantage of some new features that have recently been added to Durable Functions.</p>
<h3>Orchestrator History</h3>
<p>One of the great things about <a href="https://docs.microsoft.com/en-gb/azure/azure-functions/durable/durable-functions-overview">Durable Functions</a> is that the history of each orchestration is stored using an &quot;event sourcing&quot; technique, meaning that it is possible to get a very detailed log of exactly what happened.</p>
<p>In particular you can discover what the input and output data of the orchestrator was, as well as the input and output data of every single activity function or sub-orchestrator that was called along the way. You can access the status of an orchestrator by calling the <a href="https://docs.microsoft.com/en-us/azure/azure-functions/durable/durable-functions-http-api#get-instance-status">Get Instance Status API</a> or using <a href="https://docs.microsoft.com/en-gb/azure/azure-functions/durable/durable-functions-instance-management#querying-instances">DurableOrchestrationClient.GetStatusAsync</a>.</p>
<p>All this information is brilliant for both troubleshooting and auditing purposes. But it does also raise a few questions.</p>
<p>First, if I'm a heavy user of Durable Functions, will my Task Hub fill up with vast amounts of historical data that I no longer need or want?</p>
<p>Second, how can I search back through history to find any failed orchestrations, or orchestrations that are still running but should have terminated by now?</p>
<h3>Enumerating Orchestrations</h3>
<p>The recent <a href="https://github.com/Azure/azure-functions-durable-extension/releases/tag/v1.7.0">Durable Functions 1.7.0 release</a> includes features that help with both those tasks. It builds on the existing <a href="https://docs.microsoft.com/en-gb/azure/azure-functions/durable/durable-functions-http-api#request-with-paging">get all instances API</a>, and adds <a href="https://docs.microsoft.com/en-gb/azure/azure-functions/durable/durable-functions-http-api#request-with-paging">paging capabilities</a>, which would be essential if a large number of historical orchestrations were present.</p>
<p>In my <a href="https://github.com/markheath/durable-functions-ecommerce-sample">Durable Functions e-Commerce sample app</a>, I have a web page that uses the get all instances API to show all orchestrations started in the last two hours (which works well for my talks, as I only want to show orchestrations I create during the talk). I do this with <a href="https://docs.microsoft.com/en-gb/azure/azure-functions/durable/durable-functions-instance-management#querying-all-instances">DurableOrchestrationClient.GetStatusAsync</a>, passing in the start time, and all the orchestration statuses I'm interested in (which is all of them - this method could probably do with a simpler way of expressing that).</p>
<pre><code class="language-cs">var statuses = await client.GetStatusAsync(DateTime.Today.AddHours(-2.0), null,
Enum.GetValues(typeof(OrchestrationRuntimeStatus)).Cast&lt;OrchestrationRuntimeStatus&gt;()
);
</code></pre>
<h3>Purging Orchestration History</h3>
<p>There are several reasons why you might want to purge orchestration history. Maybe you have a strict data retention policy where you don't want to store data older than a certain age. Or maybe you just don't like the idea of your Task Hub filling up with millions of old orchestration history records that you no longer have any use for.</p>
<p>With <a href="https://github.com/Azure/azure-functions-durable-extension/releases/tag/v1.7.0">Durable Functions 1.7</a>, history can easily be purged using the new <a href="https://docs.microsoft.com/en-gb/azure/azure-functions/durable/durable-functions-instance-management#purge-instance-history">Purge Instance History API</a>, which allows you to delete either all history for a specific orchestration, or all orchestrations that ended after a specific time. Obviously, you should take care not to purge the history of in-progress orchestrations, or you will get errors when that orchestration attempts to progress to the next step.</p>
<p>In my <a href="https://github.com/markheath/durable-functions-ecommerce-sample">Durable Functions e-Commerce sample app</a>, I use <a href="https://azure.github.io/azure-functions-durable-extension/api/Microsoft.Azure.WebJobs.DurableOrchestrationClient.html#Microsoft_Azure_WebJobs_DurableOrchestrationClient_PurgeInstanceHistoryAsync_">DurableOrchestrationClient.PurgeInstanceHistoryAsync</a> to allow individual orchestrations to be deleted from my order management page. It's great for when I do a quick practice run before I give the talk and want to hide the resulting history from the UI.</p>
<pre><code class="language-cs">await client.PurgeInstanceHistoryAsync(order.OrchestrationId);
</code></pre>
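<p>The same API also supports purging in bulk. Here's a rough sketch of clearing out old, finished orchestrations by time range, assuming the overload that takes a created-time window and a set of <code>OrchestrationStatus</code> values (that enum comes from <code>DurableTask.Core</code>):</p>
<pre><code class="language-cs">// sketch: purge history for finished orchestrations created more than 30 days ago
await client.PurgeInstanceHistoryAsync(
    DateTime.MinValue,
    DateTime.UtcNow.AddDays(-30),
    new[] { OrchestrationStatus.Completed, OrchestrationStatus.Failed, OrchestrationStatus.Terminated });
</code></pre>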
<h3>Summary</h3>
<p>It's great to see that Durable Functions continues to improve. There are loads more new features I've not mentioned, so do check out the <a href="https://github.com/Azure/azure-functions-durable-extension/releases">release notes</a> for a full run-down of what's new.</p>
<p>But I'm particularly pleased with these new orchestration history managing APIs. I specifically <a href="https://github.com/Azure/azure-functions-durable-extension/issues/358">asked for them</a> and so it's great to see the open source community jump on this and implement my suggestions. These APIs were the one missing feature that I had been waiting for before feeling ready to introduce Durable Functions into one of the products I work on, so many thanks to @gled4er, @k-miyake, @TsuyoshiUshio and everyone else who helped bring these improvements to Durable Functions.</p>
<img src="http://feeds.feedburner.com/~r/markdotnet/~4/7Csr_7y1RoU" height="1" width="1" alt=""/>https://markheath.net/post/durable-functions-manage-historyhttps://markheath.net/post/resource-group-self-destructThis resource group will self destruct in 30 minutes2018-11-30T00:00:00Z2018-11-30T00:00:00ZMark Heathtest@example.com<p>I'm a huge fan of the <a href="https://docs.microsoft.com/en-us/cli/azure/install-azure-cli?view=azure-cli-latest">Azure CLI</a> - I've <a href="https://markheath.net/category/azure%20cli">blogged about it</a> and created a Pluralsight course on <a href="https://pluralsight.pxf.io/c/1192349/424552/7490?u=www%2Epluralsight%2Ecom%2Fcourses%2Fazure-cli-getting-started">getting started with it</a>.</p>
<p>I often use the Azure CLI to quickly try out various Azure resources like Web Apps or Cosmos DB databases. After playing for a while with them, I then delete the resource group I've put them in to clean up and stop paying.</p>
<p>Deleting is especially important when you experiment with expensive resources like a multi-node Service Fabric or AKS cluster. Forgetting to clean up after yourself could be an expensive mistake.</p>
<p>Enter &quot;<a href="https://github.com/noelbundick/azure-cli-extension-noelbundick">Noel's grab bag of Azure CLI goodies</a>&quot;, an awesome extension to the Azure CLI created by <a href="https://noelbundick.com/">Noel Bundick</a> which adds a &quot;self-destruct&quot; mode along with a bunch of other handy functions.</p>
<h3>Installing the extension</h3>
<p>To install the extension, simply follow the <a href="https://github.com/noelbundick/azure-cli-extension-noelbundick">instructions on GitHub</a>, and use the <code>az extension add</code> command pointing at the latest version (0.0.12 at the time of writing this). You can then see it in the list of installed extensions with <code>az extension list</code></p>
<pre><code class="language-powershell"># to install v0.0.12:
az extension add --source https://github.com/noelbundick/azure-cli-extension-noelbundick/releases/download/v0.0.12/noelbundick-0.0.12-py2.py3-none-any.whl
# to see the list of installed extensions
az extension list -o table
</code></pre>
<p>There is a one-time setup action needed for self-destruct, which creates a service principal with Contributor rights over your subscription; this is used by the Logic App that implements the self-destruct action.</p>
<pre><code class="language-powershell">az self-destruct configure
# OUTPUT (no these are not my real credentials!):
# Creating a service principal with `Contributor` rights over the entire subscription
# Retrying role assignment creation: 1/36
# {
# &quot;client-id&quot;: &quot;c9e0fb8e-18d2-44bd-b0bc-52056965a362&quot;,
# &quot;client-secret&quot;: &quot;0dbcece7-34c5-49fe-ac2e-dbab9cb310e1&quot;,
# &quot;tenant-id&quot;: &quot;fc3d0620-79f6-4b16-80b4-3b486a33514e&quot;
# }
</code></pre>
<h3>Using self-destruct mode</h3>
<p>To use self-destruct mode, you simply specify the <code>--self-destruct</code> flag on any resource you create with <code>az &lt;whatever&gt; create</code>. A good level to set this at is the resource group, so you can create multiple resources that will get deleted together.</p>
<p>In this example, I'm creating a resource group called <code>experiment</code> that will self-destruct in 30 minutes, and then putting an App Service Plan inside it so there is something to be deleted inside the group.</p>
<pre><code class="language-powershell">$resGroup = &quot;experiment&quot;
# can use 1d, 6h, 2h30m etc
az group create -n $resGroup -l westeurope --self-destruct 30m
# create something to get deleted
az appservice plan create -g $resGroup -n TempPlan --sku B1
</code></pre>
<p>Note that the extension will tag the resources you create with a <code>self-destruct-date</code> tag.</p>
<p>If we look inside our resource group, we'll see that not only is there the app service plan we created, but also a Logic App. This Logic App exists solely to implement the self-destruct and is even able to delete itself when it's done, which is convenient.</p>
<pre><code class="language-powershell"># see what's in the resource group (there will be logic app
az resource list -g $resGroup -o table
# Name ResourceGroup Location Type Status
# -------------------------------------------------- --------------- ---------- ------------------------- --------
# self-destruct-resourceGroups-experiment-experiment experiment westeurope Microsoft.Logic/workflows
# TempPlan experiment westeurope Microsoft.Web/serverFarms
</code></pre>
<p>If you want to, you can explore the Logic App in the Azure portal to see how it works:
<img src="https://markheath.net/posts/2018/resource-group-self-destruct-1.png" alt="Logic App" /></p>
<h3>See it in action</h3>
<p>To see what resources are scheduled for self-destruct, you can use the <code>az self-destruct list</code> command:</p>
<pre><code class="language-powershell">az self-destruct list -o table
# Date Name ResourceGroup
# -------------------------- ---------- ---------------
# 2018-11-30 13:12:42.750344 experiment experiment
</code></pre>
<p>If you've changed your mind you can disarm self-destruct mode with <code>az self-destruct disarm</code> or re-enable it later with a different duration using <code>az self-destruct arm</code></p>
<p>Finally, once the timer has expired, you can check whether it worked by searching for resources in the group. If all went well, there'll be nothing to see:</p>
<pre><code class="language-powershell">az resource list -g $resGroup -o table
# Resource group 'experiment' could not be found.
</code></pre>
<h3>Summary</h3>
<p>The self-destruct mode extension is a great way of protecting yourself against expensive mistakes, and is worth considering for all short-lived experiments. It's a superb idea, and nicely executed. The idea could be developed further: for example, it could email you asking if you are still using a resource group and, if you don't respond within a set period of time, delete it - a sort of <a href="https://en.wikipedia.org/wiki/Dead_man%27s_switch">&quot;dead man's switch&quot;</a> for Azure.</p>
<img src="http://feeds.feedburner.com/~r/markdotnet/~4/B3h4YJEWJUE" height="1" width="1" alt=""/>https://markheath.net/post/resource-group-self-destruct